Buff up your brain

Exercise improves your health. That's a no-brainer. But do the new brain-fitness programs improve your mental health?


I was watching PBS the other night when, during one of those interminable pledge breaks, I learned that with a donation I could receive a gift of the “Brain Fitness Program.” By doing a few simple mental exercises, I could improve my memory and prevent the mental ravages of aging. “Brain Fitness Program,” made by a company with the impressive name Posit Science, is one of the many new brain games that promise to sharpen our gray matter and even stave off symptoms of Alzheimer’s disease. In 2007, sales of brain-improvement games totaled $225 million.

It’s certainly true that good health practices, physical and mental exercise, and stress reduction are associated with lower rates of mental decline. But do brain fitness programs add any additional specific benefits? The optimistic answer is they might. The realistic answer is that it’s hard to know. Despite what the makers of the games claim, there isn’t a reasonably foolproof way to measure a program’s specific effects on mental performance. In other words, buyer beware.

Let’s begin with a look at “Brain Fitness Program.” Its Web site explains that as we grow older, the processing speed of our brains slows down. Thus the program is “designed to speed up auditory processing, improve working memory, and encourage the brain to produce more of the chemicals that help it remember.” Taken together, “these changes help people feel better equipped to communicate in every setting, making them more confident and more willing to engage in new experiences.”

The training consists of six exercises. One involves listening to sounds and determining whether the pitch rises or falls. Originators of the program believe the ability to process these sounds, similar in frequency to human speech, will lead to “quicker thinking, faster responses and fuller understanding.” A second exercise promises to “strengthen your brain’s ability to perceive and remember subtle differences between similar sounds that are common in English.”

Immediately these claims bring us face-to-face with a seldom-discussed problem in contemporary cognitive science — how to detect changes in mental performance. If you want to study whether a drug lowers the rate of progression of coronary artery disease, you have objective endpoints. You can compare the incidence of laboratory-documented heart attacks or heart-related deaths in a treated group and a non-treated or control group. Ditto for cancer treatment. You can assess cancer-related mortality with and without treatment. Even with the inevitable differences of design and interpretation, studies with clear-cut endpoints are the bedrock of evidence-based medicine.

But measurements of reading skills, of the ability to navigate mazes or memorize nonsense words, or of the speed with which we learn new material — the kinds of improvements heralded by brain games — aren’t subject to the same kind of independent confirmation. Functional brain imaging can show whether areas of the brain are more or less active than in a control group, but it cannot accurately predict how these changes will be reflected in the ordinary activities of daily life.

At present, the only way a brain fitness program can demonstrate its value is through traditional “neuropsychological testing.” But these tests are subjective. They are not objective counterparts to EKGs, cardiac muscle enzymes, coronary angiograms and death-certificate registries. Their validity depends solely on the establishment of “statistical norms” against which a subject’s performance is compared.

For example, if 1,000 randomly chosen people read a paragraph in an average of 50 seconds, and you take 55 seconds, that doesn’t mean you have a 10 percent impairment in your reading skills. The test can’t tell you what the five-second difference means. Because brain-fitness games rely on the quantification of mental performance to substantiate their claims, we need to understand some of the inherent limits of neuropsychological testing in general.
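To see why the raw five-second gap is uninterpretable on its own, here is a minimal sketch in Python. The spread of the norming sample is not given in the example above, so the standard deviation of 8 seconds below is purely an assumption for illustration:

```python
# A raw difference from the norm-group mean means little without the
# spread of the norming sample. All numbers here are hypothetical.
mean_time = 50.0   # average reading time of the 1,000-person norm group (s)
sd_time = 8.0      # ASSUMED standard deviation of the norm group (s)
your_time = 55.0   # your reading time (s)

# Standard score: how many standard deviations you sit from the mean.
z = (your_time - mean_time) / sd_time
print(round(z, 2))  # 0.62 -- well within one standard deviation
```

Under this assumed spread, a 55-second reading falls comfortably inside the normal range, not 10 percent below it; with a tighter spread the same five seconds could look abnormal. The number alone decides nothing.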

Neuropsychological testing is good at detecting changes in mental abilities but isn’t very good at telling us why these changes are occurring. One major limitation is its relative inability to sort out cognitive changes due to psychological factors. After an in-depth review of neuropsychological tests, the American Academy of Neurology concluded, “Anxiety, depression, psychosis, apathy, and irritability all have an impact on the patient’s ability to cooperate with testing and may directly affect cognition.” If we’re depressed or anxious, or even if we are uninterested, it will be reflected in our test scores.

The converse is also true. If we feel good about taking a test, and yes, even about the men or women administering the test, we will tend to do better. The caveats common to all mental test performances, from IQ tests to SAT scores, are intrinsic to neuropsychological tests, but with a mega-difference. IQ and SAT tests don’t suggest why we perform well or poorly. But in order to know whether a “brain therapy” is worthwhile, we need to know whether a test score improvement is a direct and real effect of the therapy.

To put this into everyday perspective, imagine asking Al Gee, a perfectly healthy 65-year-old retiree, to learn a new foreign language. Al randomly picks a computer program in beginning Greek — “It’s Greek to Me” — and studies the program for a month. His performance, including speed of recognition of individual Greek letters and comprehension of simple phrases, is measured before beginning the program, at the end of the program and periodically over the next five years.

On formal testing, Al’s speed of processing new visual symbols (Greek letters) would dramatically improve, as would his Greek-reading ability; a good percentage of this skill would persist months, even years later. None of this is surprising. Even old-timers can learn to play golf or the piano, albeit not as proficiently as when younger, and, once learned, these skills deteriorate relatively slowly over time. Their fMRI scans would show increased activity in regions related to whatever skill they acquired. This measurable improvement is to be expected; by itself, it tells us nothing about the overall “mind enhancement” of acquiring a new skill.

In addition, there may be nonspecific benefits that fall into the general category of placebo effect. If Al is thrilled at having mastered the basics of a notoriously difficult-to-learn language, he is likely to believe and report that his overall cognition has improved. He may overlook or downplay his ongoing occasional memory lapses and his habit of misplacing his car keys. Rejuvenated by his “newfound powers,” his increased self-confidence might even translate into better test performance.

In this scenario, it isn’t necessary for the program to provide any general cognitive enhancement as long as the patient believes that it does. Al’s neuropsychology report will show increased processing speed, task-specific improvement in memory and comprehension, and may well show a generalized improved performance effect.

But such improvements don’t necessarily translate into better performance in daily activities. According to Michael Marsiske, associate director for research at the Institute on Aging at the University of Florida, “All the available data on cognitive training show that when a person practices something — for example, short-term memory retrieval — the person can get better at doing that test, but that the improvement does not necessarily generalize into the real world. People may believe that they are doing some mental cross-training and that they will generally improve their cognitive efficiency. That may be true, but there is no evidence for that just yet.” Similarly, the American Academy of Neurology warns against blanket acceptance of claims of task-specific improvement correlating with better performance of daily activities.

The Greek example also points to one of the central difficulties in interpreting neuropsychological testing. To establish that improved performance isn’t just a placebo effect, we need a comparison with a suitable control group. But what would constitute a reasonable comparison? Would it be adequate to compare Al to someone studying a different Greek-language program? Would Russian suffice? Or Arabic? Or would you choose a completely different task, such as learning to play the accordion? Meditate? Do crossword puzzles and read a newspaper?

Even seemingly similar tasks may represent quite different brain functions. If we want to see the effect of reading the Washington Post, is it sufficient to have the control group read the New York Times? It is nearly impossible to control the articles being read. If the Post has a book review of a Jackson Pollock biography without a photo of a Pollock painting, while the Times includes one, different brain regions will be activated. A podcast of the same article will activate yet other regions. Just being familiar or unfamiliar with Pollock’s work will have different effects.

Equally challenging is the problem of minimizing the nocebo effect. “Nocebo” refers to the creation of a negative expectation that results in reducing or negating the value of a treatment. In a classic study from the University of California at San Francisco pain clinic, a group of post-surgical dental patients were warned that the pain injection they were to receive was a new drug that might have no pain-relieving effect. So when given IV morphine, the patients reported no difference from IV saline. In the same vein, telling subjects that, “Yes, I know the test is hard and not much fun, and we don’t expect major improvements,” can produce significantly different results than offering subjects full-scale encouragement and support.

The standard approach to controlling for placebo and nocebo effects is to run double-blind clinical studies. This is straightforward if the treatment can be made to look identical to the placebo control. You ask the pharmacy to make up a placebo pill of the same size, color and taste as the active pill. Only the pharmacist knows; neither the patient nor the nurse or doctor administering the pill has any idea which pill the patient is receiving.

But it isn’t easy to create a double-blind study for a computerized brain exercise program. A brain game that teaches new phonemes will be strikingly different from one that offers crossword puzzles. An inadvertent (or not-so-inadvertent) word or gesture of encouragement or discouragement by a research assistant or computer tech setting up the program can readily influence the test results in ways that cannot be detected.

Armed with this perspective on the limitations of neuropsychology testing, let’s return to the “Brain Fitness Program.” Posit Science states that the “Brain Fitness Program has been subjected to several large, rigorous clinical trials that demonstrate it speeds up auditory processing by 131 percent, improves memory by an average of 10 years, and more.” It touts a study called “IMPACT” (“Improvement in Memory With Plasticity-based Adaptive Cognitive Training”). The study states “that people can make statistically significant gains in memory and processing speed if they do the right kind of scientifically designed cognitive exercises … and that people who used the Posit Science program reported positive changes in their everyday lives.”

But we already knew that learning new sounds would lead to improved processing speed in the same way that Al learned to read Greek faster. And we could have predicted the high likelihood that Al would report improved overall performance. None of these already expected results tell us anything about a specific benefit of the “Brain Fitness Program,” as opposed to, say, reading “Ulysses.”

Although the IMPACT study is cited on Posit Science’s site as “proof” of the value of the program, it has not appeared in a peer-reviewed publication. So, to be fair, I examined the last published article that Posit Science cites to uphold its claims — the 2006 Memory Enhancement study published in the Proceedings of the National Academy of Sciences.

A couple of major problems are immediately apparent. The first is that the study is sponsored by and conducted by the company. This is largely unavoidable; there isn’t enough independent or government grant money to study every proposed program. A second and even more serious concern is the authors’ representation that the study was both properly controlled and double-blinded. Despite the claim of Posit Science, it’s a real stretch to believe that reading the New York Times or watching an educational DVD is a comparable brain activity to concentrating on recognizing sudden changes in pitches of a sound.

Worse, according to the study protocol, research assistants visited participants in their homes in order to properly set up the computers and explain how to use the training activity, and then made weekly visits to ensure that the participants were properly using the programs. Given the dramatic differences in assigned tasks, it requires a giant leap of faith to state that participants “received identical interaction with and coaching from research assistants.” It is difficult to accept that the assistants did not, in any fashion, communicate their observations either to the participants or to those administering the neuropsychology tests. In fact, the study doesn’t clearly specify the qualifications of the researchers who did administer these tests.

There’s no need to cite chapter and verse of how similar logistical problems plague each of the brain-fitness programs that I checked out. Studies designed, conducted and paid for by those with a vested interest can’t be construed as independent scientific evidence.

Admittedly, these problems don’t negate the possibility that brain programs could help cognition. So if you have the time, money and the desire, there’s no harm in firing one up. As the British Medical Journal stated earlier this year, “Most researchers believe that the risk of harm is low, even if the clinical benefit of brain training products is unproved.” If, however, you want to limit yourself to evidence-based treatments, don’t hold your breath. The question of whether brain games can effectively enhance your life isn’t likely to be resolved before you’re too old to care.

Robert Burton, M.D., is the former chief of neurology at Mount Zion-UCSF Hospital and the author of "On Being Certain: Believing You Are Right Even When You're Not." His column, "Mind Reader," appears regularly in Salon.
