Where there are people, there are lies. The theory of Machiavellian intelligence claims that our capacity to deceive developed by virtue of our distant ancestors' way of life and was refined as their primate brains grew and developed more complex structures. Studies of our closest relatives indicate that, from an evolutionary point of view, deception has to do with the youngest part of the brain, that outer layer of coiled tissue called the neocortex, which takes up nearly eighty percent of human brain volume. The Scottish primatologist Richard Byrne and his colleague Nadia Corp of the University of St. Andrews explored the brains and behaviors of eighteen species of primates, and they found a striking connection: the larger the animal's neocortex, the better it was at deceiving its fellow primates in everyday situations.
Homo sapiens lies all the time. As individuals, we discover the nature of the lie at around the age of three or four and, from then on, it is a natural companion without which only very few can imagine living. You can’t really conceive of a modern, well-functioning society without the lie.
Finding a sure-fire method for revealing lies and deceit is, of course, an age-old dream. And all cultures have had their own traditions and folklore on how to identify the perpetrators. The polygraph, which is an American invention dating back to 1913, is not used much in Europe. In many countries, it is not deemed to provide reliable evidence. In the US, on the other hand, it is used frequently by defense attorneys, prosecutors and police and, according to a Supreme Court decision, it is up to the individual judge whether polygraph data may be used as evidence in a case. The lie detector, however, is under intense attack as ineffectual. There are even organizations – for example AntiPolygraph.org – that are campaigning to scrap the apparatus altogether because of its lack of scientific grounding.
The psychologist Paul Ekman, who is now professor emeritus at the University of California in San Francisco, spent most of his career gaining expertise on how lying is reflected in facial expressions. He developed the Facial Action Coding System, which categorizes thousands of nuanced facial expressions that can be created by combinations of forty-three independent facial muscles.
But both the polygraph and reading faces are external solutions. With the new scanning methods, there has come a hope of getting beyond these indirect measures to the source of the lie itself, namely, the brain. And it must be said that the movement has been surprisingly vigorous. The first feeble experiments were done around the turn of the millennium by psychiatrist Daniel Langleben, who was then affiliated with Stanford University, in the process of studying how certain types of drugs affected the brains of hyperactive children. By chance, during his work, he ran into the theory that one side effect of their disorder was that these children had difficulty lying. He was sure that the specific kids he knew could lie perfectly well. However, they might sometimes have trouble keeping the truth under wraps, creating problems for themselves in social contexts. Langleben got the idea of looking more closely at the lie’s various stages of development.
His theory was that, in order to tell a lie, we have to undertake several independent mental operations. On one hand, the brain has to prevent the truth from slipping out and, on the other, it has to construct the lie itself and serve it to the world in place of the truth. Langleben believed that this dual bookkeeping should be observable in a brain scan as activity in various circuits. The lie, in other words, had to leave a physiological trace behind.
To test the idea, he didn't call in hyperactive children but ordinary university students, whom he instructed to lie about a particular playing card. They were given the five of clubs in an envelope and then went into an MRI scanner, where they were to press a "yes" or "no" button to indicate a match, in response to a series of playing cards displayed, one by one, on a screen. The inducement was that they would win twenty dollars if they lied so convincingly that the machine couldn't catch them. But as demonstrated in the article that was later published, the students weren't very good at it. Even their innocent lie about a playing card left a clear imprint.
What immediately stood out for the researchers was that the lie showed increased activity in the whole prefrontal cortex, an indication that lying involved more thinking and cognitive work than telling the truth. There were also particular regions that stood out. The researchers put special emphasis on the anterior cingulate cortex, whose function is still being debated but which presumably plays a role when we deal with conflicting information. At any rate, it becomes highly active in the classic Stroop test, where people are presented with a series of words that each name one color but are printed in another. When the research subject is asked to say what color the ink is, they often stumble and read the written word instead and, while they do, their anterior cingulate cortex rings the alarm bell.
Daniel Langleben was hooked, and when he moved to the University of Pennsylvania, he continued his work with modified lie scenarios. In his second experiment, the participants could actively choose whether they wanted to lie to the nice researchers. At a general level, the results were comparable – parts of the brain revealed that lying took place – but it was less clear whether you could talk about a definitive mapping of the lie’s anatomy.
In the meantime researchers in Great Britain and Hong Kong produced their own independent studies of the phenomenon, and they all showed a clear difference between a lie and the truth in the individual research subject. But the experiments themselves were slightly different in their conceptualization, and there were likewise differences in exactly which areas of the brain were activated during the exercises.
At Harvard, psychologist Stephen Kosslyn began thinking about the matter, and he concluded that the provisional hypothesis was far too simple to describe reality. Lies and deceit can take infinitely many forms, and it does not necessarily seem obvious that the brain should treat them all in the same way. It feels different, for example, to come up with something on the spot as opposed to delivering a carefully considered falsehood.
And this is where Kosslyn struck. He wanted to directly compare the results from experiments with spontaneous lies à la Langleben and rehearsed stories that his research subjects had plenty of time to learn. They came to the experiment with their own account of a vacation about which Kosslyn asked them to change certain points. He suggested, perhaps, having the vacation take place somewhere else or coming up with a fictive companion. After a few hours of repetition, all twenty voluntary liars were put into the scanner, where they answered questions about their vacation experiences.
Totally in keeping with Kosslyn’s suspicions, there were differences between the two types of lie. For one thing, it was clear that the rehearsed lie did not involve the anterior cingulate cortex as much as a spontaneous deception. At the same time, there were striking differences in how memory resources were involved.
Seen through Stephen Kosslyn’s prism, researchers have only scratched the outermost frosting on the cake of deceit, and he has argued that an understanding of the phenomenon requires far more intensive research, attacking the problem from many angles. He says that to gain genuine insight into the mechanics of the lie will require us to delve deeper into fundamental phenomena such as memory and sensation.
However, while Kosslyn was thinking in complex terms, the growing interest in lies was no longer merely academic. At the US Defense Department, the field had been deprioritized, almost hidden away, for years, but the department had its eyes opened in the wake of the catastrophe of 9/11. Now it was suddenly of vital importance to be able to evaluate the credibility of sources and statements, but unfortunately the equipment for it was lacking. The Department of Homeland Security and the Defense Advanced Research Projects Agency (Darpa) have followed suit and loosened their purse strings for research on lying.
Darpa, for example, has given support to Daniel Langleben, who, after his first studies, quickly decided to get rid of the academic hair-splitting and get down to developing a functional lie detector. With the experiences of the polygraph in mind, one of his goals was to create a format that was free of subjective interpretation. A data processor that is, so to speak, untouched by human hands. He realized this by developing a set of algorithms that could determine when the person in the scanner was lying or telling the truth. Andrew Kozel of the Medical University of South Carolina came up with the same idea and in 2003 published his own algorithms that could distinguish lies from truth in controlled experiments. Both instances involved computer programs that could analyze the data on brain activity from MRI scanners without human mediation and point out when the activity indicated a lie.
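Neither Langleben's nor Kozel's actual algorithms are reproduced here, but the general idea, training a program on brain-activity measurements labeled "lie" or "truth" and letting it classify new trials without human mediation, can be sketched with simulated data. Everything below is hypothetical: the two features stand in loosely for the regions the fMRI studies implicated, and the nearest-centroid classifier is merely one simple stand-in for whatever the real algorithms do.

```python
import random

random.seed(0)

# Hypothetical setup: each scanner trial is summarized by activation levels
# in two regions the studies implicated (prefrontal cortex and anterior
# cingulate cortex). Per the studies, lies show elevated activity in both.
def simulate_trial(lying):
    base = 1.0 if lying else 0.0
    return [base + random.gauss(0, 0.3),   # prefrontal activity
            base + random.gauss(0, 0.3)]   # anterior cingulate activity

train = [(simulate_trial(lbl), lbl) for lbl in [True, False] * 50]

# Nearest-centroid classifier: average the feature vectors for each class,
# then label a new trial by whichever class average it sits closer to.
def centroid(cls):
    rows = [x for x, lbl in train if lbl is cls]
    return [sum(col) / len(rows) for col in zip(*rows)]

lie_c, truth_c = centroid(True), centroid(False)

def classify(features):
    def dist(c):
        return sum((f - v) ** 2 for f, v in zip(features, c))
    return dist(lie_c) < dist(truth_c)   # True means "lie"

test = [(simulate_trial(lbl), lbl) for lbl in [True, False] * 50]
accuracy = sum(classify(x) == lbl for x, lbl in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The sketch makes the classification look easy only because the simulated signal is cleanly separated; real fMRI data are far noisier, which is exactly why the published accuracy figures remain contested.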
According to the Department of Defense Polygraph Institute (DoDPI), there are currently about fifty laboratories in the US alone working in one way or another on understanding and detecting lies – using not only fMRI technology but different forms of EEG, for example. Recently, psychologist Jennifer Vendemia at the University of South Carolina tested – with five million dollars from the DoDPI – almost seven hundred students with EEG measurements. She put them in a hood containing 128 electrodes covering the scalp, and registered the electrical charges that came in the form of diverse brain waves. What interests Vendemia about the lie are so-called event-related potentials, ERPs, which are brain waves unleashed by particular stimuli.
If, for example, you show a person something visual, that person will, after 300–400 milliseconds, register a clear ERP, a sign that something is happening in the brain, such as a thought or increased attention. The EEG, unlike scanning, does not provide good spatial information, but it is much better at providing information about time. A functional MRI scan provides a picture only every other second, while the electrodes on the scalp can register changes down to a thousandth of a second.
And it is in these time differences that Jennifer Vendemia's results may be found. In her experiments, she typically presents the research subject with short statements that are either true or false and coded in two colors. Every time the statement is red, the person is to answer "true," while the required answer is "false" when the little sentence is printed in blue. However, the statements are arranged so that both red and blue statements can be either true or false. The research subjects answer as instructed, but their ERP pattern reveals when they are stating a lie. If you get the red statement "A snake has thirteen legs" and answer "true," it takes 200 milliseconds longer to respond, and the ERP signal is stronger in regions in the middle and top of the head. Roughly speaking, the MRI experiments also point toward some of these areas.
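The logic of that color-coding trick can be made concrete with a small simulation. Nothing here comes from Vendemia's actual data pipeline; the 600 ms baseline, the 200 ms lie penalty, and the noise level are illustrative numbers chosen to match the effect size the text describes.

```python
import random

random.seed(1)

# Hypothetical simulation of the design: a statement appears in red
# (respond "true") or blue (respond "false"), independently of whether
# the statement is actually true. When the required response contradicts
# the facts, the subject is effectively lying, which adds ~200 ms.
BASE_MS, LIE_PENALTY_MS = 600, 200

def trial():
    statement_true = random.choice([True, False])
    color_red = random.choice([True, False])   # red means "answer true"
    required_answer = color_red                # the rule of the task
    lying = (required_answer != statement_true)
    rt = random.gauss(BASE_MS + (LIE_PENALTY_MS if lying else 0), 50)
    return lying, rt

trials = [trial() for _ in range(2000)]
mean = lambda xs: sum(xs) / len(xs)
lie_rt = mean([rt for lying, rt in trials if lying])
truth_rt = mean([rt for lying, rt in trials if not lying])
print(f"lie trials slower by {lie_rt - truth_rt:.0f} ms on average")
```

The point of the design is visible in the code: because color and truth value are crossed independently, "lie" and "truth" trials are identical on the surface, and only the extra processing time separates them.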
Jennifer Vendemia doesn’t think her method can be hoodwinked by good liars. At any rate, she has looked at the extent to which practice can change the extra reaction time and found that even trained liars have exactly the same ERP pattern as pure novices. Her most interesting claim, though, is that her measurements can predict a lie before the liar has decided on it. Thus, she sees the first changes in the person’s EEG about 250 milliseconds after the statement appears on the computer screen, while it takes between 400 and 600 milliseconds before the pattern showing a decision appears.
There is a third player up in Seattle, Lawrence Farwell, whose Brain Fingerprinting Laboratories is marketing his own Brain Fingerprinting method. Like Vendemia's, it is an EEG technology, but Farwell uses an electrode-studded headband instead of a hood, and he concentrates on the so-called P300 brain wave, which is part of the overall ERP pattern. What he is testing is the extent to which a person recognizes a given stimulus. It can be anything from a telephone number to a picture of a decrepit summer house in the country. And the principle is that something you have seen before unleashes a characteristic electrical response between 300 and 800 milliseconds after it is presented. In EEG readings, you can see the P300 wave as a clear peak on a curve with smaller waves and, with his own patented algorithm, Farwell believes he can detect a lie with nothing less than one hundred percent certainty. In his sales materials, he states that he has tested the technique on two hundred research subjects in projects financed by the CIA and the FBI; but, in published articles, it is down to six research subjects.
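The bare principle, a recognized stimulus producing a positive peak in a fixed time window, can be sketched as follows. Farwell's patented algorithm is not public, so this is only an illustration: the waveform, the bump's size and timing, and the detection threshold are all invented for the example.

```python
import math

SAMPLE_MS = 4   # one sample every 4 ms, i.e. a 250 Hz EEG recording

def simulate_erp(recognized):
    """Simulate 1000 ms of scalp voltage after a stimulus appears."""
    wave = []
    for i in range(250):
        t = i * SAMPLE_MS
        v = math.sin(t / 40.0) * 0.5          # background oscillation
        if recognized:                        # add a P300-like bump ~450 ms
            v += 6.0 * math.exp(-((t - 450) ** 2) / (2 * 60 ** 2))
        wave.append(v)
    return wave

def shows_p300(wave, threshold=3.0):
    """Check for a peak well above baseline in the 300-800 ms window."""
    window = wave[300 // SAMPLE_MS : 800 // SAMPLE_MS]
    return max(window) > threshold

print(shows_p300(simulate_erp(True)))    # recognized stimulus
print(shows_p300(simulate_erp(False)))   # novel stimulus
```

In practice, single-trial EEG is too noisy for a simple threshold like this; real systems average the response over many repeated presentations before looking for the peak, which is one reason published sample sizes matter so much.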
Farwell and his headband have appeared on all the big TV stations in the US, but Brain Fingerprinting has also had its debut in the courtroom. In 2000, a District Court in Iowa conducted a hearing to determine whether Terry Harrington, who had been convicted of murder in 1978, could have his case reopened. Hired by the defense attorney, Farwell tested Harrington by showing him pictures of the murder site, and he testified that, according to his P300 results, the convicted man had never seen it before. Then the only witness in the case admitted to having lied about seeing Harrington at the murder site, and the judgment was ultimately reversed. In connection with the hearing, there was an eight-hour long discussion about the extent to which Brain Fingerprinting could be admitted in court and, in 2001, the judge determined that the test lived up to the legal standards for scientific evidence.
Some also believe there are signs that MRI technology is not far from being admissible. At any rate, in 2005 the US Supreme Court ruled against the execution of minors, a decision partially grounded in MRI studies showing that, in many regions, the brains of young people do not function like hardened adult brains.
Yet, even as MRI-driven lie research has been transformed into product development and marketing in record time, a few worried voices on the sidelines are shouting their concerns. Sociologist Paul Root Wolpe, a professor of bioethics at Emory University, imagines a violent counter-reaction from the general public. Wolpe believes that many will quite simply see the technology as a piece of creepy science fiction reminiscent of mind-reading and surveillance à la Orwell's "1984." He speaks indefatigably for an open, popular debate on the subject.
The American Civil Liberties Union, the ACLU, agrees. They are particularly nervous that a new, sexy technology to detect lies will be misused under the aegis of the war on terror. In the spring of 2006, the ACLU held a symposium at Stanford University, at which selected researchers, philosophers and other observers gave their interpretation of the matter, and subsequently they asked for access to records from the American government. They wanted to give the public an insight into how huge the funding is for research into MRI and other lie detector technologies.
Even in Europe, we’ve heard an echo of this discussion. In 2005, a few hundred invited citizens and neuroscientists met at the Meeting of Minds conference in Brussels, where they discussed the future significance of brain research and knowledge about the brain. One of the problems discussed was the possibility of “mind-reading.” Specifically, Brain Fingerprinting was discussed, and several prominent scientists expressed fundamental ethical concerns about the potential for this type of technology to invade our inner space. When you open up the brain’s processes in this way, you violate, in a heavy-handed way, the individual’s right to keep his or her thoughts and feelings private.
Lone Frank is an award-winning journalist, science writer, and TV presenter. She holds a Ph.D. in neurobiology, and has worked as a research scientist in Denmark and the U.S.
Excerpted from “The Neurotourist: Postcards From the Edge of Brain Science” by Lone Frank, published by Oneworld. All rights reserved.