Mind reading is possible!

Advances in neuroimaging suggest telepathy could be on the horizon. It's time to consider how we'd use it

Published December 15, 2012 11:00PM (EST)

(ollyy via Shutterstock)

Excerpted from "The Brain Supremacy: Notes From the Frontiers of Neuroscience"

"If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place." Eric Schmidt, CEO of Google

Three motives drive neuroscience research: the clinician’s urge to heal, the analyst’s urge to understand, and the engineer’s urge to improve. Understanding and repairing the brain have always gone along with wanting to improve it, and proponents of human enhancement have eagerly anticipated the brain supremacy. Could brain techniques like neuroimaging be used to extend or transcend natural human capacities, for instance by allowing us more direct access to other minds? Could learning, problem-solving, and social interactions be transformed?

Most of us are already skilled mind-readers, using facial expression, tone of voice, body language, and our own experience to infer what the people we interact with are thinking and feeling. Yet these markers are proxies of our inner states, "accessories accepted in lieu of the internal character," as Charles Dickens called them. As victims of con artists learn to their dismay, our beliefs about other minds are sometimes incorrect. Neuroimaging offers the hope that we could bypass the need to infer mental content from external cues. This is the superpower of practical telepathy: detecting and decoding minds at source.

Back in my graduate days, I remember hearing functional MRI dismissed as "brain geography," prettily descriptive but doing little for real understanding. Since then PET (positron emission tomography), fMRI, and their descendants have become an immensely fruitful set of research tools, and the literature they create has burgeoned. Journal articles reporting on fMRI studies now cover everything from sensory differences to psychological biases, courage to empathy, reward processing to print processing — and more. Brain-imaging techniques have proved their inventive worth.

In the past year or so, some truly remarkable claims have been made for neuroimaging. Here are some examples:

By using [ ... ] functional MRI, we decoded activity across the population of neurons in the human medial temporal lobe while participants navigated in a virtual reality environment. Remarkably, we could accurately predict the position of an individual within this environment solely from the pattern of activity in his hippocampus.

Traces of individual rich episodic memories are detectable and distinguishable solely from the pattern of fMRI BOLD signals across voxels in the human hippocampus [voxels are 3D pixels, the units of the grid into which brain scans are segmented for analysis].

This article [ ... ] demonstrates how a resulting theory of noun representation can be used to identify simple thoughts through their fMRI patterns.

These [ ... ] models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. [ ... ] Our results suggest that it may soon be possible to reconstruct a picture of a person’s visual experience from measurements of brain activity alone.

So scientists can already do a form of DNE recording. It seems that they can decode where you are and what you’re looking at, what memory you’re reliving, and even what you’re thinking. Has the brain supremacy achieved so much already? As we shall see, it's always more complicated than that. At present these startling claims are strictly limited, precisely because of those complications. However, there seems no reason why they should not come to apply more generally in the very near future. What then might be the consequences of such a technology?
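
The common thread in these reports is pattern decoding: a statistical model is trained on the distributed patterns of activity across voxels that accompany known stimuli, locations, or memories, and is then asked to identify the condition behind a pattern it has never seen. The sketch below illustrates only the bare idea, on simulated data and with an off-the-shelf classifier; the cited studies work with measured BOLD signals and far more sophisticated models, so treat this as a cartoon of the approach rather than a description of their methods.

    # Toy illustration of multivoxel pattern decoding, using simulated data.
    # Real studies decode measured fMRI BOLD signals, not random numbers.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 50

    # Two conditions (say, two images) that nudge the mean activity of a small
    # subset of voxels, buried in noise.
    labels = rng.integers(0, 2, size=n_trials)
    signal = np.zeros((n_trials, n_voxels))
    signal[labels == 1, :10] += 0.5
    patterns = signal + rng.normal(size=(n_trials, n_voxels))

    # The decoding question: can the condition be identified from the voxel pattern alone?
    decoder = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(decoder, patterns, labels, cv=5).mean()
    print(f"cross-validated decoding accuracy: {accuracy:.2f}")  # well above chance (0.50)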

Practical telepathy

At first glance, a world in which mind-reading became available not just to researchers or governments but to anyone who wanted it may look like a place full of promise. Lovers could know, at last, if their partners truly cared for them. Friends could detect betrayals before they happened. Banks, the police, and governments could catch more fraudsters. Psychiatry, counseling, welfare, and the criminal justice system could be transformed. Lies and cheating would fall out of favor, at least in face-to-face relationships, and honesty would find itself fashionable.

Practical telepathy would force us to be more open with ourselves and others. Like the CEO of Google quoted at the start of this chapter, many people link openness with virtue. If this is the case, opening minds to other people’s scrutiny should result in general moral improvement. Imagine government officials, sales personnel, the media, and leaders everywhere being put under pressure to say only what they actually believed. Imagine the impact on consumerism and employment, family and friends, if everyone had access to portable, perhaps even concealable, brain-scanning technology.

The consequences of enhancing human capacities to detect mental activity vary depending on what you are detecting and how. There is a difference between current overt neuroimaging techniques and potential covert technologies. The latter’s availability will depend on whether sciences that are as yet undeveloped, such as nanotechnology and room-temperature superconductivity, can make brain monitoring equipment sufficiently cheap and portable. If nanomachines, perhaps in the form of proteins encoded in synthetic genes, could be designed to emit a signal when certain neurotransmitter molecules were released in certain brain areas, and if the artificial DNA could be administered in food, drink, or as an aerosol, a person thus infected might never know that their privacy had been lost. Until that is possible, computational power and statistical analysis will be used to increase the sensitivity of monitoring technology. The ability to record electromagnetic "brain waves" at a greater distance from the skull, with less interference from other electromagnetic radiation, would be a considerable asset, for example.

On whom will these technologies be used, and when? Enemy soldiers? Suspect or criminal individuals? Celebrities or holders of public office? Anyone of interest to the media, or the government? If electromagnetic fields are being recorded, is there anything to prevent such recordings being made of more than one brain at a time? Perhaps methods could be developed to monitor groups — looking for signals as to whether a crowd of demonstrators is likely to turn violent, for example — or even entire populations, finally putting electromagnetic flesh, however crudely, on that most elusive of notions: public opinion.

Another important distinction is between techniques presenting their results in real time and those using later, off-line data analysis. In either case, will the information flow one way from participant to researcher, or will it be fed back to the brain that sourced it -- as a method of clinical treatment, for example? How the results are presented, who gains access to the data, and how much training the recipients will need to understand them also need to be considered, as with any research study.

What exactly would a mind-reader read?

Overt or covert technology, offering immediate or delayed results and targeting individuals or groups: The possibilities are already remarkable. There is, however, a further question: What aspect of brain function will be measured?

One form of mind-reading could involve detecting the contents of a person’s consciousness. Here the potential benefits for human creativity are immense. I personally long for a system which could translate my sometimes vivid dreams directly into pictures and videos, since my drawing skills are abysmal. Skilled artists too would surely enjoy the ability to transfer their mental images directly to a screen; likewise for composers, film directors, novelists, web designers, programmers and other creators. So to any scientists working on brain downloading, please hurry up.

These techniques could be used in many domains. Entertainment, education, medicine and psychiatry, and criminal justice are only the more obvious possibilities. We may be able, one day, to make our own DNE records, to share or program our dreams, to learn new skills direct from the minds of experts, or to communicate with loved ones purely by thinking. If the technology can be miniaturized and the computing power made available, real-time recording of brain function could become a routine aspect of everyday life, perhaps even continuously so. Thus applied, it could prove an unparalleled aid to diagnosis, or even prevention, of mental distress. It could change definitions of what counts as unacceptable mental activity, allowing individuals to be treated for thoughts, fantasies or memories they — or others — find disturbing, even when a doctor would say that there was no clinical problem. And it could solve one of the biggest problems in medicine by establishing a baseline for normal function against which the clinical symptoms could be compared.

Intention reading

The concept of reading intentions is of very great interest in criminal justice and forensic psychiatry. Thoughts alone are insufficient here. If Edmund sits quietly at his desk dreaming about how an axe through the skull would improve a tiresome colleague, he is doing no more actual harm than a worker who spends company time on Facebook. However rapt his fantasies may be, they do not hurt anyone as long as he keeps them to himself and doesn’t either mention or perform the fatal craniotomy. There may come a time when George Orwell’s thoughtcrime is seriously proposed as legislation, but for now a man’s imagination is still his own backyard. If intention-reading technology became available, it would have to be able to tell the difference between violent fantasy and the moment when Edmund snaps and looks around for the nearest sharp implement.

Identifying the urge to commit a dangerous action before the action takes place is not as easy as it may sound. In monkeys, scientists can already detect intentions for simple movements like gaze-shifting, where the direction in which the eyes are going to move can be inferred from the activity of neurons in specific regions of cortex. Researchers have also successfully suppressed aggression in male mice, using optogenetics to stimulate part of the hypothalamus. Monkeys and mice, of course, are not human beings, nor is moving your eyes the same as beating someone up. The process of teasing out the neural pathways underlying human violent behavior is as yet incomplete. Nonetheless, these studies are intriguing hints of what may be possible in the not too distant future.

If detecting violent intentions could be done, especially if it were coupled with mechanisms for preventing such behavior, it could render prisons virtually redundant, replacing them with clinics where anyone identified as an offender is fitted with the monitoring technology. However, such methods are likely to be used, before concerns about their efficacy and ethics have been thoroughly ironed out, by governments struggling with the problems of predicting violence, dealing with addiction, and minimizing antisocial behavior. Thinking about them well in advance is therefore worthwhile. Since, in practice, any such system will probably begin as a tool for controlling violent killers, respect for their human rights may well be minimal; yet what starts with managing a murderer may spread to anyone judged to be habitually violent, and then to the only potentially violent. We should be wary of establishing the principle that anyone, even a criminal, should be banned from intending violence as opposed to actually committing it. The idea that if you have something that you don’t want anyone to know, then maybe you shouldn’t be thinking it in the first place, is something not even the lords of cyberspace have yet suggested.

Researchers have already noted the ethical conundrum posed by being able to predict undesirable outcomes, like violence, partially but not absolutely. Various factors are known to correlate with a greater likelihood of violent behavior. Some are social (e.g., gang membership, participating in a war, living in a culture which heavily emphasizes honor, living in a dangerous neighborhood). Some are personal (e.g., a history of violence, childhood physical abuse, lack of early supervision), and some are bio-markers (e.g., being male, being young, perhaps having certain genes or physiological traits). Unfortunately, knowing that your next-door neighbor is a shady character with a troubled background and a savage temper does not give you the means to predict his next explosion.

This inability to apply predictions to individuals is a general feature of scientific explanations, especially in the behavioral sciences, because they depend on statistical analyses of how groups of people — sometimes very small groups — behave under certain more-or-less realistic conditions. Analyzing collective behavior deliberately glosses over the personal idiosyncrasies that make individual actions so difficult to predict. Scientific theories and hypotheses in brain research are thus framed in statistical terms about groups, not persons. They express probabilities rather than certainties, and generalities rather than specifics.

Because it would be unethical to deliberately induce violence in a community, or an individual, for research purposes, many studies of harmful behavior also express correlations rather than causal links. Saying that people with more risk factors have a higher probability of committing violent acts means that if you took a large sample of people with risk factors and another sample of people without, you would likely find that the high-risk group was more violent. It does not mean that everyone with the risk factors will be violent, nor do such correlational studies tell us that having the risk factors causes a person to be violent.
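
A toy calculation makes the gap concrete. Suppose, with figures invented purely for illustration, that a risk factor quadruples the group-level rate of violence, from one percent to four percent. The group difference is real and large in relative terms, yet flagging every carrier of the factor as a future offender would be wrong about ninety-six times out of a hundred.

    # Invented numbers, purely to illustrate group-level risk versus individual prediction.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    has_factor = rng.random(n) < 0.20        # 20% of the sample carry the risk factor
    rate = np.where(has_factor, 0.04, 0.01)  # the factor quadruples the rate of violence
    violent = rng.random(n) < rate

    # The group-level difference is clear...
    print(f"violence rate with factor:    {violent[has_factor].mean():.3f}")   # ~0.04
    print(f"violence rate without factor: {violent[~has_factor].mean():.3f}")  # ~0.01

    # ...yet treating every carrier as violent mislabels roughly 96% of them.
    false_alarms = (has_factor & ~violent).sum() / has_factor.sum()
    print(f"share of flagged individuals who never act violently: {false_alarms:.3f}")  # ~0.96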

Intention-reading technology, however, would be a step beyond risk factor research. One need not say, "This person has the kind of profile that violent people have, so let’s lock them up/tag them/monitor them just in case," thereby risking the injustice of locking up an innocent citizen. Instead, neuroimaging would be used to identify the neural patterns activated when a person is just about to commit a violent act, and it would be combined with monitoring of the environment to assess whether it was safe for them to do so. Of course, the person might then exercise remarkable self-control ... but if he or she didn’t, a system to detect the outgoing motor command and intervene in some preventative fashion is not beyond the wit of scientists. As noted earlier, ethical concerns would remain with such a system, but realistically, progress in ethics is like progress in science: one step at a time, only slower.

Emotion reading

Another possibility is that future techniques will be able to detect moods, emotions, desires, and dislikes more accurately than can skilled human perceivers. Since the capacity to assess other people’s feelings is extremely useful and widely variable, the benefits of this kind of enhancement could be considerable; in principle, it could bring all of us up to the standard of highly empathic, emotionally literate people. Work is already under way on multiple techniques to improve emotional understanding for people who are deficient in it because they have autism. Some are chemical (e.g., using the hormone oxytocin, applied as a nasal spray), but neuroimaging is also playing a part. For example, fMRI is being used to detect differences in brain activity in autistic people.

Finding a robust and repeatable physical difference, a "bio-marker," is the first step towards achieving the analytic goal of understanding why autism involves such devastating problems with social interactions. Eventually, the hope is that researchers can devise a treatment to achieve the clinical goal of normal function — and perhaps, thereafter, the enhancement goal of making us all more adept at reading each others’ hearts and minds.

Greater access to emotional states, in the sense of more accurate detection, would not necessarily imply more empathic togetherness. Empathy appears to be dependent on contextual features and on whether or not the person’s cognitive resources are already drained or distracted. One important aspect of the context is similarity: Empathy for other people’s emotions, and their pain, is more likely to be evoked by people like us. If a person sees a friend or partner in pain, they will probably try to help relieve the pain, and they may feel the pain themselves to some extent — it works better in women, apparently. If, however, they judge that the pain is deserved punishment, because for example the sufferer previously acted unfairly, empathy can be reduced — at least in men. If the sufferer is classed as an enemy, empathy may also be lessened; in some cases, the observed suffering may even become rewarding. Then there are the cases where empathy leads to so much distress in the empathizer that they can’t bear the pain and react by retreating, denying the suffering, or feeling active hostility to the sufferer who is unwittingly hurting them. Better recognition of other people’s feelings through technology, therefore, will not automatically produce better ways of dealing with them.

Furthermore, similarity is not a yes/no distinction but a complex gradient between "like" and "unlike." How similar to myself I judge you to be depends on what aspects of your appearance, behavior, and personality I happen to value or notice as I make the judgement. That in turn can be affected by what else is going on in my environment. If, for example, I express my delight in classical music, and you adore Mozart, then you may feel we’re more similar than my obvious revulsion at your political opinions might have led you to believe. Empathy between people can change extremely rapidly depending on the circumstances. The emotional contagion through which we pick up another person’s moods via subtle changes in body language, prosody, facial expression, and so on can also be very fast, and these changes are often largely subconscious. Using neuroimaging technology to, in effect, bring them to consciousness might assist people to regulate their own responses.

There is, however, a danger: Too much information might lead to overload, stressing people into reverting to stereotyped behaviors. I have argued previously that the brain can be seen as an effort-minimization device, with conscious perception serving as a marker of effort. This is why learning a skill is initially very much a conscious activity, with awareness diminishing as the skill becomes habitual. Conscious processing of information from neuroimaging technology is likely, therefore, to be far more effortful than the brain’s usual social processing, rendered habitual by many years’ experience, which typically occurs below the threshold of consciousness. Compared with what brains achieve, as a matter of course, during a simple social interaction, our conscious processing capacities are woefully restricted. Adding to their burdens will have to be carefully done.

Perhaps the prospect of monitoring other people’s emotions in real time is too ambitious. Apart from anything else, not every human being is interested in other human beings’ feelings. Surely a major motivation for pursuing wealth and status is the desire to escape the bondage of having to care about what other people feel. Of those among us who are interested in emotions, some are altruistic, but many have instrumental motives: marketing, political leverage, or other forms of manipulation. Is it wise to provide them with yet another tool?

Thought-reading

This brings us back to the traditional form of practical telepathy: as "silent speech" or thought-reading. Here again the implications of making such powers available are almost unimaginable. Politics, for example, could be transformed, with voting performed via mentally activated computers and candidates assessed on the basis of the visceral responses they inspire in voter focus groups. Advertising and marketing are already looking to neuroscience; think what they could gain from these techniques. Diplomacy would have to change; so would government, the media, and even science itself. Indeed, it is difficult to think of any area of society that would not be affected should this child of the brain supremacy be born.

Classic science fiction portrayals of telepathy tend to regard it as a gift (though it may be a curse, as well). It is often a marker of superiority and/or the next evolutionary step awaiting human beings: One thinks of the many instances in Star Trek, the "group minds" of telepathic children in John Wyndham’s "The Midwich Cuckoos" and "The Chrysalids," and so on. These stories suggest that, as with many powers, mindreading is dangerous when unequally distributed, but can also be a positive force for social harmony. If practical telepathy of this kind does become available, therefore, much will depend on who gets it and when.

Devilry lurks as ever in the details, which are so smoothly passed over when merely uttering the word "telepathy." Imagine a device — portable or perhaps implanted — that can deliver real-time thought streams: DNE data extracted from other brains, smoothed and remapped onto your cortex. At last, the gift to see ourselves as others really see us. (Be careful what you wish for.) But how will it work? Surely reception and transmission would not be switched on by default — imagine the noise — so we can imagine a focused system with settings appropriate for the circumstances. A "lecturer" setting, offering one-to-many broadcasting, could transform teaching, politics, and the media, for instance. Requiring consent to "sync" with someone else and pick up their transmissions would be the equivalent of opting in to data-sharing — and no doubt as easy for governments to override when, for example, chasing a suspected terrorist. Search technologies would allow the system to tune into certain DNE patterns and ignore others, allowing automated analysis to scan the population for "dangerous" thoughts.
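
To make "settings appropriate for the circumstances" a little more tangible, here is an entirely hypothetical sketch of what such a device's configuration might look like, following the distinctions drawn above: transmission off by default, explicit consent to sync, and a one-to-many "lecturer" mode. Every name in it is invented for illustration; no such system or API exists.

    # Entirely hypothetical: a sketch of settings for an imagined telepathy device.
    # Every name here is invented for illustration; no such system or API exists.
    from dataclasses import dataclass, field
    from enum import Enum


    class Mode(Enum):
        OFF = "off"                # neither sending nor receiving (the default)
        ONE_TO_ONE = "one_to_one"  # private exchange with explicitly synced partners
        LECTURER = "lecturer"      # one-to-many broadcast; receiving disabled


    @dataclass
    class TelepathySettings:
        mode: Mode = Mode.OFF
        synced_partners: set = field(default_factory=set)    # explicit, revocable opt-ins
        pattern_filters: list = field(default_factory=list)  # DNE patterns allowed through

        def can_receive_from(self, sender_id: str) -> bool:
            # Receive only in a receiving mode, and only from partners who consented to sync.
            return self.mode is Mode.ONE_TO_ONE and sender_id in self.synced_partners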

Your choice of partner would be crucial. Enticing as the thought of spying on other people’s mental lives may be, there are few Prousts out there whose cranial worlds would be worth raiding. If my head, and the blogosphere, are anything to go by, most of the neural chatter would be inane. Ow-it-hurts, yum-chocolate, must-wash-up, stop-it-do-some-work: We’d need some mechanism to filter out the junk from our transmissions. Who knows, the result might be a gigantic mental clean-up and an admirable improvement in internal self-regulation. A side effect might be that spoken language becomes associated with lower financial and educational status, as is already happening for Internet abstinence. Speech and its support systems might even eventually atrophy from lack of use. Another unintended consequence might be that people withdraw still further from face-to-face interaction — where they risk being scanned — in favor of safer, more controllable, virtual connectivity.

The quagmire of ethics

Mind-reading makes the ethical issues already raised by recent developments in social media -- such as tailoring adverts to a person’s profile and location -- seem minor, especially if it becomes possible to apply the technology covertly. Yet it raises many of the same concerns, so we can regard public reaction to social media as a trial run for more distant products of the brain supremacy. I have already mentioned a major anxiety: mental privacy, given the many gaps between thought and behavior. This is especially problematic when the technology intersects with power differentials in our unequal society. The powerful are likely to have more access, earlier, both to mind-reading and mind-protecting technologies.

Another concern is to do with control and ownership. Who would own the DNE data gathered by mind-reading technologies? Who could exploit it for gain? If you took a photograph of a person in the street, you might view that photograph as yours, but would that be equally true if you took a brain scan? What if your government scanned you, either without consent or with consent gained by some form of pressure, like making a scan mandatory for certain jobs, benefits, or tax concessions? Would you be happy for that information to be held at all, given governments’ lamentable history of incompetence when it comes to data security? Would you be happy for it to be passed to all sorts of third parties in the name of greater efficiency? Or would you want the ability to opt out and delete the data?

A third concern is mission creep. Government allows the invasion of its citizens’ privacy for specific reasons, like suspected criminality. Mind-reading scans, however, might well be vulnerable to reanalysis for reasons never used to justify the original study. Some kinds of scans might also provide information irrelevant to the purpose of the scan but hugely important to the individual scanned, such as the discovery of a brain tumor. This could be extremely damaging for individuals if a scan taken for one purpose (e.g., to vet a candidate, by an employer) was then reanalyzed for another (e.g., to look for disease, by an insurance agent). Clinical neuroimaging technologies have procedures in place for this eventuality, but if mind-reading is to become available to people beyond the current specialized user base, we need to think carefully about who has access and what training, if any, they receive.

As brain scanning technologies become able to detect not just blatant disease but more subtle changes, the ethical problems they carry become more acute. Some are familiar from other contexts, like genetics: What if a scan shows the first small signs of an incurable neurodegenerative disorder? Some, however, are peculiar to the brain and come down to the emphasis we humans place on certain aspects of brain function — the ones we call beliefs and desires. Here’s an example: Imagine you’ve applied for a job as a schoolteacher. You reluctantly agreed to the routine brain scan and are horrified to be told that the machine detected the presence of inappropriate thoughts about children. Not only do you fail to get the job, you risk being stigmatized, losing access to your own family, and being forcibly detained for "rehabilitation." The problem? You were so nervous that you found yourself wondering if you could ever have felt a sexual urge towards a child. Anxiously reviewing your past encounters with children, you involuntarily remembered an uncomfortable teenage experience of sex. The machine correctly detected anxiety, thoughts of sex, and memories of being with children, but the interpretation was dangerously wrong.

Pedophilia, most people agree, is an evil, and its status is reflected in law. When it comes to those beliefs and desires disliked by many but not (yet) made illegal, the possibilities evoked by practical telepathy start to look very worrying indeed. If, kept awake yet again by my noisy neighbors, I dream of them dropping abruptly and quietly dead, I don’t want that wicked thought made public, with names and dates attached. Especially not if it earns me an antisocial thought order, or whatever equivalent future governments use to crush their less-than-perfect citizens into shape.

People whose sex lives include unconventional — but entirely theoretical — components may likewise want to keep their fantasies to themselves. So may anyone whose criticisms of those in power, if openly stated, might cause them problems. The gap between thought and action allows space for human agency: self-control, the understanding that fantasy and reality are distinct, and the acceptance, essential to maturity, that not all desires can or should be gratified. Remove that gap, and one consequence will be that human beings become more infantilized, less able to control their own behaviour, and more tolerant of external controls like social pressure and state power.

Any form of social control, once applied, is far easier to extend than to roll back. Society, talk of free speech notwithstanding, is already extremely conformist. I’ve lost track of the number of times I’ve come across someone saying, "It may be true, but you just can’t say things like that!" Thought-reading, potentially so good for social openness, could be catastrophic for personal liberty. Research scientists may be currently barred by ethical constraints from doing the kinds of studies which would directly threaten that liberty, but ethical climates change — as we are already seeing with ideas about privacy since the arrival of social networking. Even if research restrictions are maintained, streams of progress which find their way blocked by ethics are apt to be diverted into other channels, such as those offered by military research or some private enterprise, where the moral constraints are looser. If ever there were a "dual-use" technology, offering both benefits and dangers, mind-reading is surely it.

These examples involve potential harms to individuals. There are other cases, however, which do not cause obvious harm but which may nonetheless make us feel uncomfortable about the benefits of ultimate openness. Here is an actual instance from a conference where neuroimaging results were presented prior to publication. The fMRI experiment involved showing religious and non-religious people pictures of women. The "experimental" picture had religious meaning; the "control" picture looked similar but had artistic rather than religious value. The results were as expected apart from one religious gentleman, whose brain had responded more intensely to the control image. When the researchers inquired, he confessed that he had found the lady in the picture rather attractive. So they slipped that information into the presentation. Cue amusement from presenter and audience.

The data were anonymous, and the participant who gave up his time, unpaid, for science is most unlikely ever to know he’s been laughed at, so where’s the harm? Again, we have a pre-existing analogy: Those noble people who donate their bodies for medical research will never know if students make rude remarks about their corpses. (In the past, trainee medics have done a lot worse than that, but I’m assuming prank control is stricter these days.) Since no harm is done, does it matter if the students, or the researchers, are less than respectful of their volunteers? Or is harm not the only consideration here? My instinctive reaction was that the laughter wronged that unknown man and demeaned the gigglers, though they caused no harm. What do you think?

Do we need privacy?

We are, for now, still private people. To take evolutionary psychology seriously implies that having a private self was either advantageous or, at the very least, not problematic for our ancestors. Why might that be? The standard model proposes that limited resources — food, shelter, good-quality mates, etc. — force organisms and the genes they carry to compete in what Charles Darwin called the struggle for existence.

The struggle for existence inevitably follows from the high geometrical ratio of increase which is common to all organic beings. [...] More individuals are born than can possibly survive. A grain in the balance will determine which individual shall live and which shall die — which variety or species shall increase in number, and which shall decrease or finally become extinct.

To survive in a changeable world for long enough to reproduce, it is a great help to be able to predict at least some of the changes. In social species like ours, many of the most important and potentially dangerous variables are other individuals, especially competitors. Skill in understanding why they act as they do and in predicting what they will do next remains an advantage; people with autism, who seem to have difficulty with this, often struggle to function well in society. For our ancestors, even a slightly better-than-average gift for second-guessing others may have been enough of a grain in the balance to tip our species toward a trajectory where theory-of-mind skills were favored by selection.

Developing better prediction, however, is only one side of the evolutionary arms race, because if your rivals can predict your behavior as well as you can theirs, where’s the advantage? That sets up another selection pressure: Less-predictable individuals may be better, over time, at exploiting resources. In a social species, however, trust between members of the same group is so crucial that behavioral extremes are necessarily constrained. A little mystery may procure the impression of charisma — a useful asset — but excessive unpredictability makes you seem unreliable, mentally disturbed, and possibly dangerous. That reputation may get you kicked out of the group, with catastrophic results for you and your genes.

Being able to keep some beliefs and desires hidden, however, allows you to exploit resources without necessarily telling the group about them: to cheat and take a free ride, now and again, when you feel you won’t get caught. It also gives you a social currency: By strategically revealing hidden parts of your self, and reciprocating when others do so, you can build trust. These benefits require a private self. As we have acquired cultures, symbolic thinking, religions, philosophies, and ideologies, our private selves have grown accordingly to encompass abstract beliefs and ideals. Yet they remain firmly grounded in our individual and separate bodies, which is why, when our privacy is invaded, we feel not only angry and afraid but violated, ashamed, and humiliated.

Be wary, therefore, of those who call for greater openness, especially when they are more powerful than you. Asking, "Cui bono?" may not necessarily produce the answer, "Mihi!" Opening up your private self can be beneficial when trying to build trust, but in competitive conditions, it makes you more easily exploited. Encouraging openness among the powerful — among whom I include the media — is no bad thing. Demanding it of the less powerful, especially when it is not reciprocated, may not benefit them and could worsen their lack of control.

If we do ever find ourselves faced with practical telepathy, arguments like the old canard "Why worry if you’ve nothing to hide?" will undoubtedly be produced, as they have been for every invasion of privacy from the Domesday Book to the CCTV camera. They are bad arguments, using social pressure to disguise the coercion involved. We have private selves for good reason. Openness is in itself neither good nor evil, so anyone wishing to extend it must make their case and show us they can be trusted. We may live in a world of technological prowess, but we are still creatures guided by ancient reciprocities. If you ask for a piece of my self, you must show me that you are fit to take care of it.

Will employers, partners or governments demand access to our minds as a sign of trust? Will the media espouse open access as the must-have accessory? Will market researchers and politicians clamor for access to data which gives new insight into voters and consumers? Should we expect the offense of cognitive rape — non-consensual scanning — to be added to the statute book? And will the technologies be sold as entertainment, therapy, surveillance, or essential survival kit in the brave new world?

The problem is not immediate. Mind-reading technologies, whatever the hyperbole may suggest, will not imminently be joining the arsenal of methods available to governments and companies who wish to render us more predictable. It's more complicated than that; as we shall see, there is much more work to be done. Nonetheless, such technologies as DNE recording are possible. The rate of development in science is so rapid, and still accelerating, that every day seems to bring a new trophy hauled from the realms of science fiction into the pages of a journal. We barely raise an eyebrow at achievements which would have had people gasping a mere few decades ago. Mind-reading is just another notch on science’s bedpost.

Except that it isn’t. This is a trophy capable of transforming not only our relations with other people (as the Internet is doing), not only our quality of life (as the car has done), but our innermost selves: What it is to be an individual human being. It may well be with us before we are ready for it. Between now and then, we will undoubtedly hear much about the blessings it could bring us; this chapter has presented only a few. But we also need to look closely at what we may be giving up.

Reprinted from "The Brain Supremacy: Notes From the Frontiers of Neuroscience" by Kathleen Taylor with permission from Oxford University Press USA. Copyright © 2012 by Kathleen Taylor.


By Kathleen Taylor
