Certainty is everywhere. Fundamentalism is in full bloom. Legions of authorities cloaked in total conviction tell us why we should invade country X, ban “The Adventures of Huckleberry Finn” in schools, eat stewed tomatoes, how much brain damage is necessary to justify a plea of diminished capacity, the precise moment when a sperm and an egg must be treated as a human being, and why the stock market will revert to historical returns. A public change of mind is national news.
But why? Is this simply a matter of stubbornness, arrogance or misguided thinking, or is the problem more deeply rooted in brain biology? Since my early days in neurology training, I have been puzzled by this most basic of cognitive problems: What does it mean to be convinced? This question might sound foolish. You study the evidence, weigh the pros and cons, and make a decision. If the evidence is strong enough, you are convinced there is no other reasonable answer. Your resulting sense of certainty feels like the only logical and justifiable conclusion to a conscious and deliberate line of reasoning.
But modern biology is pointing in a different direction. It is telling us that despite how certainty feels, it is neither a conscious choice nor even a thought process. Certainty and similar states of “knowing what we know” arise out of primary brain mechanisms that, like love or anger, function independently of rationality or reason. Feeling correct or certain isn’t a deliberate conclusion or conscious choice. It is a mental sensation that happens to us.
The importance of being aware that certainty has involuntary neurological roots cannot be overstated. If science can shame us into questioning the nature of conviction, we might develop some degree of tolerance and an increased willingness to consider alternative ideas — from opposing religious or scientific views to contrary opinions at the dinner table.
I call the mental sensation of certainty the “feeling of knowing.” Everyone is familiar with the most commonly recognized feeling of knowing. When asked a question, you feel strongly that you know an answer that you cannot immediately recall. Psychologists refer to this easily recognizable feeling as a tip-of-the-tongue sensation. The frequent accompanying comment as you scan your mental Rolodex for the forgotten name or phone number is: “I know it but I just can’t think of it.” You are aware of knowing something, without knowing exactly what this sensation refers to. The most profound feeling of knowing is the “aha,” a spontaneous notification from a subterranean portion of our mind, an involuntary all-clear signal that we have grasped the heart of a problem. It isn’t just that we can solve the problem; we “know” that we understand it.
To understand what I mean about the feeling of knowing, read the following paragraph at normal speed. Don’t skim, give up halfway through or skip to the explanation. Because this experience can’t be duplicated once you know the explanation, take a moment to ask yourself how you feel about the paragraph. After reading the clarifying word, reread the paragraph. As you do, pay close attention to the shifts in your mental state and your feeling about the paragraph:
A newspaper is better than a magazine. A seashore is a better place than the street. At first it is better to run than to walk. You may have to try several times. It takes some skill but it is easy to learn. Even young children can enjoy it. Once successful, complications are minimal. Birds seldom get too close. Rain, however, soaks in very fast. Too many people doing the same thing can also cause problems. One needs lots of room. If there are no complications it can be very peaceful. A rock will serve as an anchor. If things break loose from it, however, you will not get a second chance.
Is this paragraph comprehensible or meaningless? Feel your mind sort through potential explanations. Now watch what happens with the presentation of a single word: kite.
In an instant, you are flooded with the “aha” feeling that the paragraph makes sense. There’s no time for deep consideration and evaluation. Before you can reread the paragraph, your unconscious mind has already sorted through various possibilities, determined that the sentences collectively fit the description of a kite and sent you notification.
Determining how this involuntary feeling of knowing happens takes us into the enormously complicated details of neurobiology. To simplify them for this discussion, let me borrow a term, “hidden layer,” from the artificial intelligence community.
By mimicking the way the brain processes information, A.I. scientists have been able to build artificial neural networks (ANNs) that can play chess and poker, read faces, recognize speech and recommend books at Amazon.com. While standard computer programs work line by line, yes or no, all eventualities programmed in advance, the ANN takes an entirely different approach. The ANN is based upon mathematical programs that are initially devoid of any specific values. The programmers only provide the equations; incoming information determines how connections are formed and how strong each connection will be in relationship to all other connections. There is no predictable solution to a problem — rather, as one connection changes, so do all the others. These shifting interrelationships are the basis for “learning.”
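The learning process described above — connections that start with no specific values and are shaped entirely by incoming data, each shift in one weight rippling through all the others — can be sketched in a few lines of Python. This is a toy illustration under simple assumptions (a two-unit hidden layer, the XOR task, plain gradient descent), not a model of the brain or of any production system; all names and parameters here are hypothetical.

```python
# A minimal artificial neural network with one hidden layer.
# The weights begin as small random values ("initially devoid of any
# specific values"); training data alone determines how strong each
# connection becomes relative to the others.
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: a task that cannot be learned without a hidden layer.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_hidden = [0.0, 0.0]
w_out = [random.uniform(-1, 1) for _ in range(2)]
b_out = 0.0

def forward(x):
    """Pass an input through the hidden layer to a single output."""
    h = [sigmoid(sum(w_hidden[j][i] * x[i] for i in range(2)) + b_hidden[j])
         for j in range(2)]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(2)) + b_out)
    return h, y

def loss():
    """Total squared error over the four training examples."""
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = loss()

# Gradient descent (backpropagation): every example nudges every
# connection, and each adjustment changes the context for all the rest.
lr = 2.0
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        d_y = (y - t) * y * (1 - y)
        for j in range(2):
            d_h = d_y * w_out[j] * h[j] * (1 - h[j])
            w_out[j] -= lr * d_y * h[j]
            for i in range(2):
                w_hidden[j][i] -= lr * d_h * x[i]
            b_hidden[j] -= lr * d_h
        b_out -= lr * d_y

loss_after = loss()
print(loss_before, loss_after)  # compare error before and after training
```

No line of this program states the solution to XOR; the answer emerges in the interrelationships among the weights, which is exactly where the "hidden layer" metaphor locates it.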
With an ANN, the hidden layer is conceptually located within the interrelationships between all the incoming information and the mathematical code used to process it. In the human brain, the hidden layer doesn’t exist as a discrete interface or specific anatomic structure; rather, it resides within the connections between all neurons involved in any neural network. A network can be relatively localized or widely distributed throughout the brain. Proust’s taste of a madeleine triggered a memory that involved visual, auditory, olfactory and gustatory cortices — the multisensory cortical representations of a complex memory. With a sufficiently sensitive fMRI scan, we would see all these areas lighting up when Proust contemplated the madeleine.
The hidden layer thus offers a powerful metaphor for the way the brain processes information. It is in the hidden layer that all elements of biology (from genetic predispositions to neurotransmitter variations and fluctuations) and all past experience, whether remembered or long forgotten, affect the processing of incoming information. It is the interface between incoming sensory data and a final perception, the anatomic crossroad where nature and nurture intersect. It is why your red is not my red, your idea of beauty isn’t mine, why eyewitnesses offer differing accounts of an accident or why we don’t all put our money on the same roulette number.
The powerful feeling of knowing arises out of the hidden layer’s unconscious calculation of correctness, be it recognizing a face or believing an idea is right. The greater the likelihood of correctness, as determined by your unconscious, the stronger the sense of certainty.
In his bestselling “Blink,” New Yorker staff writer Malcolm Gladwell describes gut feelings as “perfectly rational,” as “thinking that moves a little faster and operates a little more mysteriously” than conscious thought. But he’s flying in the face of present-day understanding of brain behavior. Gut feelings and intuitions, the Eureka moment and our sense of conviction, represent the conscious experiences of unconsciously derived feelings.
Look at the feeling of knowing in the light of evolution. It explains how we learn. Compare it with the body’s various sensory systems. It is through sight and sound that we are in contact with the world around us. Similarly, we have extensive sensory functions for assessing our interior milieu. When our body needs food, we feel hunger. When we are dehydrated and require water, we feel thirsty. If we have sensory systems to connect us with the outside world, and sensory systems to notify us of our internal bodily needs, it seems reasonable that we would also have a sensory system to tell us what our minds are doing.
To be aware of thinking, we need a sensation that tells us that we are thinking. To reward learning, we need feelings of being on the right track, or of being correct. And there must be similar feelings to reward and encourage the as-yet unproven thoughts — the idle speculations and musings that will become useful new ideas.
To be an effective, powerful reward, the feeling of conviction must feel like a conscious and deliberate conclusion. As a result, the brain has developed a constellation of mental sensations that feel like thoughts but aren’t. These involuntary and uncontrollable feelings are the mind’s sensations; as sensations they are subject to a wide variety of perceptual illusions common to all sensory systems. Understanding this couldn’t be more important to our sense of ourselves and the world around us.
It’s not easy, of course, but somehow we must incorporate what neuroscience is telling us about the limits of knowing into our everyday lives. We must accept that how we think isn’t entirely within our control. Perhaps the easiest solution would be to substitute the word “believe” for “know.” A physician faced with an unsubstantiated gut feeling might say, “I believe there’s an effect despite the lack of evidence,” not, “I’m sure there’s an effect.” And yes, scientists would be better served by saying, “I believe that evolution is correct because of the overwhelming evidence.”
I realize that this last sentence runs against the grain of those who have fought the hardest to establish science as the method for determining the facts of the external world. It is particularly loathsome when you feel that you are playing into the hands of religious fanatics, medical quacks and word-twisting politicians. But in pointing out the biological limits of reason, including scientific thought, I’m not making the case that all ideas are equal or that scientific method is mere illusion. My purpose is not to destroy the foundations of science, but only to point out the inherent limitations of the questions that science asks and the answers it provides.
Substituting “believe” for “know” doesn’t negate scientific knowledge; it only shifts a hard-earned fact from being unequivocal to being highly likely. Saying that evolution is extremely likely rather than absolutely certain doesn’t reduce the strength of the argument, and at the same time it serves a more fundamental purpose. Hearing myself saying “I believe” where formerly I would have said “I know” serves as a constant reminder of the limits of knowledge and objectivity. At the same time as I am forced to consider the possibility that contrary opinions might have a grain of truth, I am provided with the perfect rebuttal for those who claim that they “know that they are right.” It is in the leap from 99.99999 percent likely to 100 percent guaranteed that we give up tolerance for conflicting opinions, and provide the basis for the fundamentalist’s claim to pure and certain knowledge.
A related consideration is to distinguish between felt knowledge — such as hunches and gut feelings — and knowledge that arises out of empiric testing. Any idea that either hasn’t been or isn’t capable of being independently tested should be considered a personal vision. Shakespeare does not demand that we accept Hamlet as representing a universal truth. We judge him according to the standards of art, literature and personal experience. Hamlet is neither right nor wrong. If, in the future, Hamlet is found to have a gene for bipolar disorder, we are entitled to reassess our initial interpretations of Hamlet’s relationship to his mother. Hamlet is a vision. No matter how seemingly reasonable and persuasive, each such vision begins with a very idiosyncratic perception that seeks its own reflection in the external world. Each writer’s personal sense of purpose drives the arguments, picks out the evidence and draws conclusions. Such ideas should be judged accordingly — as visions, not as obligatory lines of reasoning that must be universally shared.
To retreat from claims of absolute “knowing” and certainty, popular psychology needs to explore how mental sensations play a fundamental role in generating and shaping our thoughts. We can’t afford to continue with the outdated claims of a perfectly rational unconscious, or of knowing when we can trust gut feelings. We need to rethink the very nature of a thought, including the recognition that various perceptual limitations are inevitable.
At the same time, if the goal of science is to gradually overcome deeply embedded superstition, it must be seen as a more attractive and comforting alternative, not as inflammatory exhortation and confrontation with a none-too-subtle whiff of condescension. Try to peddle the vision of a cold, pointless world at a Pentecostal revival meeting and you have an inkling of the challenge. In a recent survey, nearly 90 percent of Americans expressed the belief that their souls will survive the death of their bodies and ascend to heaven. Such beliefs, no matter how counter to the evidence, provide the majority of Americans with a personal sense of meaning. If forced to choose between reason and a sense of purpose, most of us would side with purpose. This apparent choice isn’t even an entirely conscious decision. If science hasn’t yet made a dent in such beliefs, it seems unlikely that further efforts will miraculously turn the tide.
Such discussions pose the same ethical problems inherent in placebo treatments. Simply put, a placebo effect is a false belief that has real value. To insist that there is no soul or afterlife is the moral equivalent of taking away the placebo effect arising out of an unscientific belief. Studies have shown that sham arthroscopic surgery can allow some patients to walk comfortably again. No one should recommend sham knee surgery, yet many physicians are comfortable recommending less drastic but unproven treatments for pain.
The answer lies in a personal risk-reward calculation — how to provide comfort without undue side effects or cost. But the intentional use of a placebo comes at a cost. Even without side effects or excessive cost, the precedent of falsely representing benefits of a treatment has its own long-term undesirable effects. The most serious would be the erosion of trust between the physician and patient. On the other hand, eliminating all placebo treatments because they are intellectually dishonest raises its own set of problems, including the cynical zeitgeist of valuing science over compassion. There isn’t an easy solution or right answer; each of us will calculate the risk versus reward according to our own biology and experience.
In medicine, we are increasingly developing ethical standards for complex medical decisions that allow for hope and the placebo effect, yet don’t fly in the face of evidence-based medical knowledge. The guiding principle of the Hippocratic oath is primum non nocere — above all, do no harm. This same principle should be a cornerstone of how science competes in the world of ideas. Science needs to maintain its integrity while it retains compassionate respect for aspects of human nature that aren’t “reasonable.”
This balance of opposites extends to all aspects of modern thought. For example, it doesn’t make sense to ask someone if he’d like to take a placebo; the very question strips the placebo of much of its intended benefit. Similarly, it isn’t clear how to have a reasonable discussion on the nature of the self that both maintains the integrity of science — the self is an emergent phenomenon and not some separately existing entity — and allows each of us to feel that we are individuals and not mere machinery. I cannot imagine a world in which we fully accepted and felt that we were nothing more than fictional narratives arising out of “mindless” neurons. And I cannot imagine how much empathy we would have with others if we saw disappointment, love and grief solely as chemical reactions. Faced with this chilling interpretation of our lives, it isn’t surprising that most people opt for the belief in material “souls” and/or anticipate that real live virgins are patiently awaiting their arrival in heaven.
F. Scott Fitzgerald described an easy-to-accept but difficult-to-accomplish solution: “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time and still retain the ability to function.” This juggling act requires us to keep in mind what science is telling us about ourselves while acknowledging the positive benefits of nonscientific or unreasonable beliefs. Each opposing position has its own risks and rewards; both need to be considered and balanced within the overarching mandate — above all, do no harm.
Just as we learn to cope with the anxieties of sickness and death, we must learn to tolerate contradictory aspects of our biology. Our minds have their own agendas. We can intervene through greater understanding of what we can and cannot control, by knowing where potential deceptions lurk, and by a willingness to accept that our knowledge of the world around us is limited by fundamental conflicts in how our minds work.
Which leads us back to the beginning. Certainty is not biologically possible. We must learn (and teach our children) to tolerate the unpleasantness of uncertainty. Science has given us the language and tools of probabilities. That is enough. We do not need and cannot afford the catastrophes born out of a belief in certainty.
From “On Being Certain” by Robert A. Burton, M.D. © 2008 by the author and reprinted by permission of St. Martin’s Press.
Robert Burton, M.D., is the former chief of neurology at Mount Zion-UCSF Hospital and the author of “On Being Certain: Believing You Are Right Even When You’re Not.” His column, “Mind Reader,” appears regularly in Salon.