Our memories are wrong at least as often as they are right. At best, they are incomplete, though we might swear otherwise. This affects countless aspects of our lives, and in many cases our memories — true or false — affect others’ lives.
Perhaps the most exciting neuroscience discovery of the last several decades is that our brains are not static hunks of tissue but flexible and adaptive organs that change throughout our lives. The term used to describe this new understanding is brain plasticity. The flexibility of your brain is essential to memory and indispensable to learning. Specifically, the “plastic” parts of our brain are synapses — the connection points that allow neurons to transmit signals between each other. Hyper-magnification of synapses in an adult human brain reveals various sizes and shapes: some shaped like mushrooms, others more like small hills, and still others like broad-based mountains. The incredible part is that in your brain and mine, synapses are morphing from one shape to the next depending on the need — how fluid the connection between neurons needs to be, for example — and this continues happening throughout our lives.
Another discovery related to brain plasticity is that the amount of activity between neurons corresponds directly to how strong their connection will continue to be. Cognitive scientists commonly use the phrase “Neurons that fire together wire together.” What this means in terms of memory is that the more intense the activity is between neurons constituting your memory of any given event, the more robust the memory will be. That is one reason why emotionally charged memories frequently percolate to consciousness in vivid detail. I can still remember almost exactly where I was standing outside my high school in Florida just after the space shuttle Challenger exploded. I was looking up at the sky and could see the entire shape of the explosion outlined in smoke.
All “where were you when” moments have a degree of this same intensity, which makes them easier to recall than anything else that was happening at the time. In fact, as I write this, history has just transpired: The president announced that Osama bin Laden had been killed. I feel as though I now have bookend memories of the horrible events of 9/11. I can recall in detail what I was doing when I first learned of the terrorist attacks in 2001, and now I will have a memory of where I was when the mastermind of that attack was finally dispatched almost 10 years later. These are powerful, emotional events accompanied by intense neural activity. The imprint, though still imperfect, remains with us for a lifetime. That is not true, however, for most memories. Even for the sharper memories born from strong emotions (often called flashbulb memories), time erodes the infrastructure, leaving cracks and gaps. Instead of remembering specific, perfectly accurate details, what constitutes memory over time are general impressions of events with spotty details — and the older we get, the spottier they become.
To figure out why that happens, among other puzzles, cognitive science has tackled memory more aggressively than perhaps any other topic. As a result, we have a growing wealth of research to draw upon to better understand the quixotic art of remembering. What we now know is that our brains happily reconstruct memories, though we are frequently fooled into thinking that the reconstructions are seamlessly recorded recollections.
The bad news is that our memories are anything but concrete and can be altered with relative ease. The good news is that imperfect memory is an evolutionary adaptation that serves our species well much of the time. Loss of memory, and creation of new memory, is central to a relatively efficient system of information processing that never sleeps. The selective movement of information into long-term memory is an adaptive marvel that allows our brains to store crucial pieces of information that we will rely on in the future, and shed information not worth holding onto. The process is not neat and tidy, and memory selectivity often works against us (think about how many memories you would love to forget). But when you view the process through the lens of species survival, it makes unassailable sense. In those terms, it is crucial to remember where the best sources of food are located; where the best hunting grounds are located; which areas to avoid lest you become something else’s dinner; and how to return safely back home. For our ancestors, reliance on memory of particular details was a matter of life and death.
The problem for us moderns is that memory, incredible adaptation though it is, faces relentless challenges in societies driven by information. We simply have too much to remember at any given time, and the vast majority of our brains are not equipped to handle the deluge. Our expectations for what we should be able to recall are hardly in line with what our brains are capable of processing — which, by the way, is an enormous amount.
We have also adopted inaccurate metaphors for memory that lead us to incorrect conclusions. The “bookshelf” metaphor, for example, suggests that when we need to recall a memory we simply find it categorized on a mental shelf, ready for consumption. The computer metaphor suggests that our brains store files on a cerebral hard drive that we can access as we would files on our laptop. These and similar metaphors are wrong for roughly the same reason: Memory does not reside in any one place in our brains, but rather is distributed across multiple brain regions. But because we connect more easily with metaphors like these than with the messy truth of distributed and reconstructed memory, the misunderstandings persist.
Most of us realize that memory is fallible because of the little things that happen all the time. We forget things like car keys, passwords, whether we turned off the oven, and so on. But how many of us would admit that our memory is susceptible to change from the outside? That’s different from simply forgetting — something we all do on our own — because someone else changing our memory requires “getting in our heads,” so to speak, right? The truth is that this sort of outside-in influence does not take very much effort to accomplish — just a few images and a little time.
A study conducted by Linda Henkel in the Department of Psychology at Fairfield University tested whether showing people photos of completed actions — such as a broken pencil or an opened envelope — could influence them to believe they’d done something they had not, particularly if they were shown the photos multiple times. Participants were presented with a series of objects on a table, and for each object were asked to either perform an action or imagine performing an action (e.g., “crack the walnut”). One week later, the same participants were brought back and randomly presented with a series of photos on a computer screen, each of a completed action (e.g., a cracked walnut), either one, two, or three times. Other participants were not shown any photos. One week later, they were brought back to complete a memory test in which they were presented with action phrases (e.g., “I cracked a walnut”) and asked to answer whether they had performed the action, imagined performing it, or neither, and rate their confidence level for each answer on a scale of one to four.
The results showed that the more times people were exposed to a photo of a completed action, the more often they thought they’d completed the action, even though they had really only imagined doing it. Those shown a photo of a completed action once were twice as likely to mistakenly think they’d completed the action as those not shown a photo at all. People shown a photo three times were almost three times as likely to think they had completed the action as those not shown a photo. Two factors in this study speak to the malleability of memory. The first is duration of time. The experiment was carried out with a week between each session, enough time for the specific objects and actions to become a little cloudy in memory, but not enough time to be forgotten. This lines up well with real-world situations, such as someone providing eyewitness testimony, in which several days if not weeks might elapse between recollections of events.
The second factor is repeat exposure to images. The study showed that even just one exposure to a photo of a completed action strongly influenced incorrect memory, while multiple exposures significantly increased the errors. If, for example, you follow a news commentator closely, reading everything he or she writes in whatever venue it appears, you may unknowingly be in a trust trap. Studies have shown that once we invest trust in a particular source of knowledge, we’re less likely to scrutinize information from that source in the future.
A study conducted by Elizabeth Loftus at the University of California, Irvine, and her team took this investigation a step further, showing that the trust trap can also result in the creation of false memories — and not only in the short term. Researchers crafted an experimental design in which they exposed two groups of participants to a series of images followed by narration about the images. The first group (referred to as the “treat-trick” group) received mostly accurate narration about the images. The comparison group received mostly misinformation. Both groups then completed tests of recall to determine how much accurate versus inaccurate information they remembered.
One month later, the participants were brought back to undergo the same experiment, except this time the treat-trick group was given misinformation during the narration (i.e., the “trick”), as was the comparison group. Both groups again completed tests of recall. Here’s what happened: In the first session, the treat-trick group had a significantly higher rate of true memory versus the comparison group (which we would expect since only the comparison group was given misinformation during this session) — at a rate of about 82 percent for the treat-trick group and 57 percent for the comparison group. But in the second session, in which both groups were given misinformation one month later, the treat-trick group had significantly lower true memory recall than the comparison group: 47 percent versus 58 percent. The most likely reason for this effect is that the treat-trick group fell into a trust trap. Because information provided by the narrative source in the first session was accurate (and test scores were high as a result), participants believed the source to be credible and trustworthy.
The comparison group, on the other hand, had no reason to invest trust in the original source and exhibited recall at roughly the same level for both sessions. What’s most interesting is the time frame of this effect. Researchers conducted the sessions a month apart, allowing ample time for a trust effect to wear off. But it didn’t. Once again, we see that the real-world implications of this research are important. Eyewitness testimony can be changed when a witness listens to an information source she has previously trusted as credible (media, interrogators, or other people), and this study suggests that the window of opportunity for this effect is large. Any follow-up information received by an eyewitness from any number of sources can significantly alter his or her memory.
If there’s anything that cognitive psychology studies have made clear over the years, it’s that humans can be exceptionally gullible. With a little push, we’re prone to developing false beliefs about others and, with equal prowess, about ourselves — and the results can be, well, hard to believe. And at the core of many of these false beliefs live false memories. For example, a study in 2001 asked participants to rate the plausibility of having witnessed demonic possession as children and their confidence that they had actually experienced one. Later, the same participants were given articles describing how commonly children witness demonic possessions, along with interviews with adults who claimed to have witnessed possessions as children. After reading the articles, participants not only rated demonic possession as more plausible than they’d previously said, but also became more confident that they themselves had actually experienced demonic possession as children. Another (less dramatic) study asked participants to rate the plausibility that they’d received barium enemas as children. As with the other study, participants were later presented with “credible” information about the prevalence of barium enemas among children, along with step-by-step procedures for administering an enema. And again, the participants rated the plausibility of having received a barium enema as children significantly higher than they had before.
If you stop and think about it, the ability to construct future scenarios in our minds is really quite remarkable. As far as we know, all other species, including our closest primate relatives, react to events as they occur. They can learn from these events and apply that learning in the future (think of chimpanzees learning how to catch ants by poking a stick into the mound and doing the same thing for every new ant mound they find), but they do not assemble complex pieces of information into a coherent whole of the future.
How exactly do we accomplish this? As with many issues in cognitive science, it is difficult to say for certain, but the latest thinking is that we engage in something called episodic future thinking, which means that we simulate the future by using elements from the past. Recent brain imaging studies show that some of the brain regions that are activated when recalling a personal memory — the posterior cingulate gyrus, parahippocampal gyrus, and left occipital lobe — are also active when thinking about a future event. Our future simulations are obviously not carbon-copy replicas of the past, but we draw on experience to generate the simulations in the same way that a sketch artist uses the pieces of information provided to her to compose an image. Sometimes we get close, other times we are far off; the farther removed the future scenario is from actual experience, the less likely it is to be in the proverbial ballpark.
Perhaps this ability, among other abstract-thinking abilities, confers an adaptive advantage. For the happy brain to make anywhere-near-accurate predictions about our environment, it helps to have access to as much past information as possible to construct multiple scenarios of what could happen. It could be that what we lack in the instinctual gumption of other primates, we make up for in imagination. The ability is far from perfect, but it is the most powerful organically based prediction tool yet evolved.
Excerpted with permission from “What Makes Your Brain Happy and Why You Should Do the Opposite,” available Nov. 22 from Prometheus Books.
David DiSalvo is a science, technology and culture writer for Scientific American Mind, Psychology Today, Mental Floss, Forbes.com, and other publications, and the writer behind the well-regarded science blog Neuronarrative, which reaches 100,000+ readers in more than 200 countries every month. He has also served as a consulting research analyst and communications specialist for the U.S. Environmental Protection Agency and several public and private organizations in the U.S. and abroad.