Over lunch with a high school buddy, I mentioned a college classmate's death from pancreatic cancer, not long after he had received a Nobel Prize for his own groundbreaking cancer research. My friend, a successful advertising executive, visibly shaken, asked, "How could he have died? He must have known everyone."
"You're kidding," I said. "Pancreatic cancer is the very definition of bad luck — hard to detect early, hard to treat, a generally grim prognosis." I was quickly drowned out by his rapid-fire follow-up questions: "Are there any good screening tests? Biomarkers? Dietary precautions I can take? Surely there must be something."
Annoyed that yet another lunch was being ruined by health anxieties, I blurted out, "It's nothing personal, but pancreatic cancer is just one of a zillion sneaky diseases lurking in the wings. At our age, we would both be better off embracing hopelessness."
"Wonderful," said my friend. "My condescending doctor friend feels obliged to enlighten me by arguing that the key to life is to admit defeat. And to think you once actually took care of patients." He pulled out some change to cover his half of the lunch and left me to pay the tip.
Alone with my coffee, I wondered why I would have intentionally provoked my friend. I wouldn't wish the feeling of hopelessness on my worst enemy. After all, feeling hopeful is purpose's handmaiden, an involuntary mental state, like love or joy, that softens reality's sharp edges. But the state of hopelessness — the sober, evidence-based recognition that nothing further can be done — now that's another story.
Years ago, in a book on pathological altruism — how believing that you are helping others can result in unanticipated harm — I described a brilliant oncologist who, hell-bent on prolonging the life of each of his patients, often turned a deaf ear to their pleas that "enough is enough." On numerous occasions I tried but failed to dissuade him from pursuing what I and others thought was overly aggressive intervention. The final straw was his insistence that I perform a lumbar puncture on a clearly terminal patient. I argued that the procedure was certain to cause the patient discomfort, with a negligible chance that it would affect his outcome. He countered that if I refused, he would do the puncture himself. I gave in; the patient suffered a post-spinal tap headache that persisted for his last three days. That was a dual cognitive blunder: the oncologist's unwillingness to accept his patient's imminent death was rivaled by my persistent inability to acknowledge the utter hopelessness of trying to convince him otherwise.
To get a sense of how difficult it is to fully embrace obvious hopelessness, I'm reminded of the time I "went broke" in a high-stakes poker game in Las Vegas. Fresh out of residency and still burdened with debt, I'd squirreled away enough winnings from our small-stakes home games to pony up a single buy-in in the "big game" held during the annual World Series of Poker. Shortly after sitting down, I found myself involved in the largest pot I had ever played. When all the cards had been dealt, I had an almost sure winner. I bet, my opponent raised, I re-raised, and my notoriously conservative opponent, after a dramatic pause, shoved in the remainder of his chips.
I realized that he had to have the one hand that could beat me. Though it was obvious that I had no chance of winning, if I folded I'd never know for certain. Having never been in this spot before, I couldn't shake the remote possibility that he had misread his hand or was making an uncharacteristically high-level bluff. For what seemed like forever, I sat motionless as unemotional probabilities jousted with wishful thinking. Of course, reason eventually failed; I called and lost.
"Sorry, kid, but you had to call," the winner said as he scooped up my chips. "You had too much money invested in the pot." He patted me on the back. "I suppose you're right," I said, getting up and starting for the door. When I was presumably out of earshot, the winner said to the other players at the table, "Throwing good money after bad — what a fish." The others laughed.
* * *
I'm watching a panel of TV talking heads outline the various reasons why Republicans and Democrats are constitutionally incapable of finding common ground. The pundits glumly acknowledge that the two parties exist in alternative universes governed by incompatible principles and diametrically opposed facts. Nevertheless, despite being unable to suggest any practical steps forward, they conclude with the self-canceling phrase, "Even so, I remain hopeful."
Really? Hopeful of what? Given their convincing skeptical arguments, why on earth should we share their unjustified sense of optimism? Imagine a simple litmus test: a national betting forum in which experts were forced to place wagers on their opinions. If they were unwilling to bet any of their hard-earned dollars, we would have an independent measure of their actual degree of hopefulness.
Moving down to the personal level: would you be willing to bet that we will soon see major improvements in our educational system, stricter gun control, a revitalized power grid, highways and bridges, high-speed transit systems, an improved health care system? That additional evidence or more convincing lines of reasoning will alter the views of creationists, atheists, climate change and Holocaust deniers or anti-vaxxers or, conversely, dissuade hardcore rationalists who insist that we will one day understand how consciousness arises, and that a foolproof "theory of everything" is imminent? Though none of my politically savvy friends and colleagues have bitten on this proposition, no matter how favorable the odds I've offered, they continue to passionately debate and argue the nuances of a better future they do not doubt will occur.
The point is obvious but bears repeating: To recognize the myriad ways in which so-called rational discourse has failed us, and yet to act as though change is just around the corner, is the same type of misplaced hope that propelled the oncologist to deny that his patient was beyond treatment, and that cost me my Las Vegas bankroll when I could not fold what I unequivocally knew to be a losing hand.
As a practicing physician, I have witnessed this conflict between emotional optimism and a dispassionate recognition of futility contribute to many of medicine's onerous excesses. Case in point: unnecessary back surgeries performed because the surgeon cannot overcome his gut feeling that the procedure might work despite the lack of objective evidence. The same dynamic applies to failed interpersonal relationships. You glumly conclude that your spouse is serially unfaithful, abusive or hopelessly addicted to alcohol or drugs, but persist with the belief that perhaps he or she will change. Ditto for dealing with a troubled, persistently rebellious teenage child: When a therapist deems your child incorrigible and recommends commitment to a rehab program, you are forced to choose between tough love and false hope.
Admitting defeat is antithetical to our default tendency to delude ourselves when times are bad, even when the negative data is indisputable. Wherever measurable, from life expectancy and quality of health care to literacy in math and science, the world ranking of the United States is in free fall. The logical conclusion: It's time for a societal tough-love project.
No, you might counter; things will be better when cooler heads prevail. Perhaps we can gather better evidence, generate more convincing arguments, work harder toward bipartisan compromise, wait for better and more widespread educational opportunities to kick in … This commonly held bedrock belief in the power of reason to shape public opinion is understandable; our founders, fully steeped in the Enlightenment-era emphasis on rationality, could not have anticipated future cognitive science advances revealing the many deceptive ways in which conscious experience arises from perceptual illusions. We are in the process of learning that our sense of self, our agency (so-called free will), and, most importantly, our sense of thinking and assessing or judging our ideas are purely involuntary mental sensations that paradoxically create the illusion of being in conscious control of our thoughts and actions.
* * *
I confess to a certain discomfort in arguing that conscious deliberation is strictly an epiphenomenon that plays no role in our decision-making. In the past I have been willing to accept that there may be a small conscious component to our thoughts that we can use to improve critical thinking. I no longer feel, however, that clinging to this unprovable, fanciful notion is even a useful fiction. As the distinction between conscious and subliminal control over our behavior is critical to mankind's future, a few words of explanation are in order.
We readily accept that perception occurs involuntarily, but tend to view reason, though arising from similar subconscious processing, as at least partially in control of its origins and premises. As we experience the flow of thought via symbols such as language or numbers, it is only natural that we assume they are the building blocks of our thoughts. Not so. As we can see from preverbal infants and other animals, language is not necessary for thought. What we experience as conscious thought — the vocabulary of reasoning — is at best a rough translation of poorly understood non-linguistic brain processes.
In my 2008 book "On Being Certain," I offered the artificial neural network (ANN)-based analogy of decision-making as the product of a subliminal committee weighing various alternatives and then sending the most appealing of them into consciousness. To simplify greatly, imagine each committee member as a set of neural connections representing a single genetic or innate biological predisposition, personal experience or cultural influence. Each committee member gets one vote, either approving or disapproving of a piece of incoming information. The committee's final tally is a function of its inherent open-mindedness, the prevailing strength of its already-acquired opinions and beliefs, the motivations of the various committee members, and the degree to which the members value evidence-based reasoning over other modes of decision-making, such as reliance on trusted authorities and prevailing dogma. Whether a piece of new information reaches awareness depends on its power to sway enough committee members.
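To make the analogy concrete, here is a toy sketch of my own (an illustration only, not the model from the book; the member names, weights and threshold are all invented). Each "committee member" casts a weighted vote on a piece of incoming information, filtered through its own bias, and the information reaches awareness only if the tally clears a threshold.

```python
# Toy "subliminal committee" (illustrative only): weighted votes decide
# whether a piece of incoming information reaches awareness.

from dataclasses import dataclass

@dataclass
class Member:
    name: str      # e.g. an innate predisposition or cultural influence
    weight: float  # how strongly this member's vote counts
    bias: float    # prior disposition toward this kind of information

def committee_tally(members, evidence_strength, threshold=0.5):
    """Return (reaches_awareness, tally) for one piece of information."""
    tally = 0.0
    for m in members:
        # Each member votes for or against, filtered through its own bias.
        vote = m.weight if evidence_strength + m.bias > 0 else -m.weight
        tally += vote
    total_weight = sum(m.weight for m in members)
    return (tally / total_weight) >= threshold, tally

# Hypothetical committee; names and numbers are invented for illustration.
members = [
    Member("genetic predisposition", 2.0, -0.3),
    Member("personal experience",    1.0,  0.4),
    Member("cultural influence",     1.5, -0.6),
]
reaches, tally = committee_tally(members, evidence_strength=0.5)
# Here the tally (1.5 of a possible 4.5) falls short of the threshold,
# so this moderately persuasive evidence never "reaches awareness."
```

Note that the outcome hinges entirely on the committee's pre-existing weights and biases, not on any conscious deliberation, which is the point of the analogy.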
Suppose you are deciding whether to take your family vacation in the mountains or at the seashore. No matter what reasons you provide, and no matter what your spouse or children counter with, they are post-hoc rationalizations for personal tastes, no different than the preference for chocolate over vanilla ice cream. Traditional modes of discourse — from polite debate to high-decibel exhortation — are no more likely to change another's tastes than trying to prove that Brussels sprouts taste sweet or bitter (a distinction that has recently been shown to be genetically determined). The essential stumbling block of modern discourse: Your reasoning may not be my reasoning any more than your tastes are my tastes.
* * *
I cannot imagine a more impossible assignment than changing how we view our thoughts. And yet, if there is to be real hope for a better collective future, we need to come up with fresh approaches that are both scientifically plausible and generally palatable. Though I have no ready suggestions, we can draw a few hints from observing nonhuman ways of thinking. Two tantalizing examples come immediately to mind: artificial intelligence deep learning and insect swarm behavior.
To begin with AI, consider what an artificial neural network (ANN), built from algorithms inspired by the human brain, minimally requires in order to learn to play chess. No advance knowledge of the game is necessary. Given a clear designation of purpose (winning) and an immense amount of training data (games played) providing appropriate feedback as to the best moves, the initially ignorant ANN will soon beat the world's greatest chess masters. (Of course, AI can only address those problems for which there is sufficient objective data; subjective issues such as human character, ethics and morality remain beyond its reach.)
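The two ingredients above, no built-in knowledge plus a clear objective and feedback on many examples, can be illustrated with something far humbler than a chess engine. In this sketch (my own toy example, unrelated to any actual chess system), a perceptron starts with all-zero weights and learns the logical AND rule purely from being told, example by example, whether its guess was right.

```python
# Toy learner (illustrative only): starts knowing nothing, improves solely
# from feedback on labeled examples -- the two ingredients named in the text.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    w = [0.0] * n  # initially ignorant: all weights start at zero
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - guess  # the feedback signal: right or wrong
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learning the logical AND rule purely from feedback:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

After training, `predict` reproduces the AND rule on all four inputs, despite never having been given a description of the rule itself.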
These two basic requirements — a large amount of uncensored data and clarity of purpose — highlight major differences between human and machine thought. Unlike a machine, each of us brings unique predispositions and cultural influences that generate highly personal hunches, intuitions and beliefs, which collectively prejudge the potential value of any incoming piece of information. By contrast, the ANN initially considers every possible move, no matter how ridiculous or nonsensical it seems to an outside observer, until each has been empirically tested.
The second prerequisite of a deep learning AI system — clarity of purpose — points out a different version of the same problem. Unlike single-purpose algorithms designed to win at chess or poker, human motivation is multifaceted, inconsistent and often contradictory. Even when we believe in the single-mindedness of our goal — winning at our Friday night poker game — we often play sub-optimally, submarined by contrary urges such as making a low-probability bluff to humiliate an irritating opponent, or playing a bad hand for the low-probability but highly appealing possibility of making a straight flush. Unfortunately, as introspection and self-reflection arise from the same opaque circuitry that we are trying to examine, our self-knowledge boils down to trust in, and acceptance of, those subliminally generated self-narratives that make their way into consciousness. (As I've written previously, our assessment of the motivation of others, based upon putting our own often-inaccurate sense of self into the shoes of another, is even more suspect.)
Some features of AI, in comparison to human thought, are worth emphasizing: There is no censoring of incoming information, no reliance upon gut feelings, no pride in untestable intuitions, no unreliable claims of motivation — the bitter fruits of mistakenly believing that we can objectively judge our own thoughts. But there is more to this than thinking at the individual level; we also need to consider group influences. For example, witness the dramatic behavioral shifts in locusts when subjected to crowded conditions.
During the dry season, locusts lead isolated, relatively antisocial lives, shying away from contact with others and living off a limited plant diet. Then, when the rains come and vegetation blooms, they breed and their population soars. While the food supply is plentiful, they remain solitary vegetarians. When the rain stops and the vegetation dries up, the swollen population crowds together in the remaining patches of vegetation. This increased contact triggers a variety of stunning behavioral changes. The locusts abandon their normally solitary habits to seek out one another's company, then start reproducing explosively to form massive swarms. Their leg muscles enlarge, and they begin marching movements in time with the other locusts. Their brain size increases by 30 percent, primarily in areas of visual processing necessary to cope with group foraging rather than solitary food finding. Even their external appearance and color change. Within hours the locusts are transformed from solitary plant eaters into synchronized, swarming, cannibalistic devourers of their brethren.
Though we cannot know what if anything a locust experiences consciously, imagine what it might be thinking if it had a mind capable of self-reflection. Might it question what came over it to go from being a loner to suddenly seeking out crowds and wanting to mate like crazy, or why it has forsaken its healthy plant diet for gross eating of its brethren's flesh? How would it interpret its radical shift in social behavior, sexual promiscuity and indifference to the plight of others?
Science to the rescue. Researchers have shown that this shift in locust behavior is triggered by stroking small tufts of hair located on the locusts' hind legs — the region that most frequently comes into contact with other locusts when they are in close proximity. Stimulation of these hairs creates an outpouring of the brain neurotransmitter serotonin; blocking the serotonin release prevents the swarming behavior.
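The mechanism can be caricatured in a few lines of code (my own toy model; every parameter is invented and bears no relation to the actual research): crowding raises contact, contact raises a "serotonin" level, and behavior flips once a threshold is crossed, unless the release is blocked.

```python
# Toy density-triggered phase switch (illustrative only; parameters invented).

def locust_phase(density, serotonin_per_contact=0.2, threshold=1.0,
                 release_blocked=False):
    """Return 'solitary' or 'gregarious' for a given crowding level."""
    # Contact (and hence serotonin) scales with population density;
    # blocking the release short-circuits the switch entirely.
    serotonin = 0.0 if release_blocked else serotonin_per_contact * density
    return "gregarious" if serotonin >= threshold else "solitary"

phases = [locust_phase(d) for d in (1, 3, 5, 8)]
# Blocking serotonin release prevents swarming regardless of crowding:
blocked = locust_phase(8, release_blocked=True)
```

The sharp flip at the threshold, with no intermediate state, is what makes the behavior look like a "decision" from the outside while being pure chemistry underneath.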
How extraordinary that, in a Rube Goldberg-like sequence of events, increased population density leads to physiological brain and muscle changes that alter perception and behavior. Have you ever wondered what the crowds converging on Miami Beach during the height of the pandemic were thinking when they shunned mask-wearing and social distancing, caught up in the moment of seeking the company of others, perhaps even with the possibility of getting lucky and "hooking up"? Or the frenzied behavior at a political rally or international soccer game? Closer to home, have you ever been exiting a crowded stadium or theater and found yourself taking short marching steps to accommodate the milling crowds surrounding you? You are sure that you have voluntarily chosen to take smaller steps to avoid others. But what if you, the crowds at Miami Beach and the stormers of the Capitol on Jan. 6 were all responding reflexively to a sudden shift in neurotransmitter levels? As agency is a perceptual illusion, how are we to distinguish between personal choice and indifferent biology? That crowds can structurally change brain anatomy and behavior should be both a cautionary tale and a clue as to how we should reconsider human thought going forward.
* * *
The above comments are not intended to in any way denigrate the value of reason, only to relocate its site of origin. Though there is no conscious control center for the mind, this does not mean that we cannot change our minds by appealing to our senses. One photo of a fatal car crash carries more weight than hours of traffic-ticket-school lectures on the evils of speeding. The smell of baked goods can enhance our desire to be charitable. A close reading of ancient Stoicism evokes an unexpected personal epiphany of acceptance of life's circumstances.
But learning the appeal of an elegant line of reasoning runs into the more basic problem that critical thinking, like any skill, is easier and more enjoyable for some than others. At one extreme, there are those for whom a lifetime of rumination and cogitation offers an unparalleled sense of meaning. For others, hard thought is a deeply unpleasant slog that cannot hold a candle to gut feelings, the warm comfort of communal beliefs and the unfettered promises of propaganda and demagoguery.
Even the best reasoning skills are not enough to arrive at a consensus opinion on the major issues of the day. Once we fully accept that critical thinking develops outside of conscious control, it becomes self-evident why the smartest among us, even when presented with the same evidence, prefer different lines of reasoning that often result in conflicting arguments and conclusions. Case in point: the widely disparate theories cluttering the field of philosophy of mind, from the diametrically opposed views of free will to the underlying nature of consciousness. We are better off seeing different modes of thinking in the same aesthetic light as personal tastes for or against Brussels sprouts, preferring Scotch to boxed white wine or switching from being a Yankees fan to rooting for the Mets.
For a moment, try to imagine the utter chaos of a world that fully accepted that our thoughts occur to us rather than being consciously generated. There would be no agreement as to what constituted good science, expertise, real versus fake news, correct logic, unequivocal proof or the degree of personal responsibility for our thoughts and actions. The fundamental tenets of democracy — freedom to choose, the equal value of each vote, the inalienable rights of the individual — would all be profoundly challenged. In short, it would look just like today's world.
But with a difference. We have been sold an unwarranted bill of goods as to our uniqueness in the animal kingdom. Like all other creatures, we are decision-making organisms, not rational agents. Our use of language and numbers and the ability to think about our thinking (metacognition), no matter how spectacular and profound, is as subliminal in origin as a termite's ability to build a termite mound.
Forget hostile debate and impassioned oratory. A willingness to change our minds requires a deeply felt acceptance that our decision-making arises out of impossible-to-fully-unravel subterranean inclinations. To get to a "we're all in this together" communal spirit, we must fully abandon our sense of pride, defensiveness and certainty in our thoughts, or even our conviction that our thoughts are solely of our own choosing (think of the locust example). My unwarranted wishful thought: perhaps stepping back from our favorite arguments will allow a glimpse of a shared humanity lurking beneath conflicting urges and ideologies. It's hard to imagine, and even harder to bet on, but just maybe — and the slightest perhaps is still better than nothing, which is why I retain a modest bit of hope in the face of the utter hopelessness of our times.