We could be reading minds soon: Inside the research that's moving us from sci-fi to sci-fact

Adam Piore, author of “The Body Builders,” talks to Salon about the science behind decoding brain waves

Published April 17, 2017 10:58PM (EDT)

(Shutterstock/Salon)

Billionaire magnate Elon Musk is trying to fill the world with electric cars and solar panels while at the same time aiming to deploy reusable rockets to eventually colonize Mars.

As if that weren't enough for his plate, Musk recently announced the launch of Neuralink, a neuroscience startup seeking to create a way to interface human brains with computers. According to Musk, this would help guard humanity against what he considers the threat posed by the rise of artificial intelligence. He envisions a lattice of electrodes implanted into the human skull that could allow people to download and upload thoughts as well as treat brain conditions such as epilepsy or bipolar disorder.

Musk’s proposition seems as outlandish and unlikely as his vision for the Hyperloop rapid transport system, but like his other big ideas, there’s real science behind it.

Figuring out what's really involved in efforts to sync brains with computers was part of what inspired Adam Piore to write “The Body Builders: Inside the Science of the Engineered Human,” which was released last month by HarperCollins.

Written in plain language that gives nonscientists a way to separate the science from the sensational, “The Body Builders” is a fascinating dive into what’s happening right now in bioengineering research — from brain-computer interfaces to bionic limbs — that will redefine human-machine interactions in the years to come.

Piore, an award-winning journalist who has written extensively about scientific advances, spoke to Salon recently about just how close we are to being able to read one another’s thoughts through electrodes and the processing power of modern computers. The transcript below was lightly edited for style and clarity.

In your research, what were some of the innovations you learned about that blew your mind?

Most of them blew my mind at some point, but the one that really stuck out [dealt with] the things people are doing with reverse engineering the way the human leg works so they can build a bionic limb. In order to do that, Hugh Herr at MIT is building a mathematical model of the way that all of the constituent parts of the lower leg interact.

There are only a few hundred muscles, ligaments, tendons and bones that constitute the lower leg, so it's manageable to have the sensing power to characterize that, express it mathematically, put that on a computer chip and then build robotic parts that can do that or build some exoskeleton device that can work in harmony with it. If you take that to the extreme, one of the biggest challenges is the human brain, where [the experimental technology involved is] basically doing the same thing except with billions of neurons.

One of the people that I profile was a guy by the name of Gerwin Schalk in Albany at the Wadsworth Center. He’s trying to decode imagined speech. That was pretty mind-blowing. They’ve discovered that when you speak, you send signals not just to the brain’s motor cortex to tell your muscles how to make the sound but also to the auditory cortex as an error-correction mechanism. And even when you’re not speaking, just thinking the words, the words still go to your auditory cortex, so Gerwin Schalk has been able to find a neural signature of this and identify different phrases.

Your book describes how Schalk re-created a muddled but clearly recognizable segment of the Pink Floyd song “Another Brick in the Wall” based solely on brain wave data collected from people who had listened to the sound clip. What is the practical application of this?

The ultimate goal is to be able to decode imagined speech. That was a demonstration showing that you could detect the music playing in somebody’s auditory cortex. But theoretically if you have the processing power and the sensing capabilities, you could detect something much more specific, like the actual words that somebody is thinking.

And if you could do that then you could help locked-in patients regain the ability to talk just by thinking. You could build a thought helmet, which was the original kind-of cockamamy scheme by the person who originally funded Gerwin Schalk. There was a guy in the Army Research Office who [provided funding for Schalk’s research] because he wanted to build a thought helmet that he had read about in science fiction books so that soldiers could communicate telepathically. It seemed outlandish at the time, but now it seems like someday it might be possible.

It seems like a Faustian bargain to have technology that could read people’s minds. Has anyone discussed the notion that someday authorities could prosecute people based on thoughts they have in their minds?

They’ve definitely explored the ethical dilemmas, but they’re a long way from being able to do that. If you are actually going to be able to have a thought helmet, even if you could do it the way it’s conceptualized for the military or to help locked-in people speak, you would need to train the pattern-recognition software.

It really wouldn't work without the cooperation [of the subject]. The way words are encoded in the brain differs from person to person. The software and the hardware would need to be trained on your own specific brain before it could actually pick out words and phrases.

But there are all sorts of ethical questions raised by these technologies, and one can imagine all sorts of "1984"-ish type mind-control issues, and they’re definitely worth exploring and discussing.

So what you're saying is that each human brain has a distinct "accent," that we all process words differently in our minds?

The brain is the most complicated pattern-recognition machine out there, and the way that different words and patterns are encoded in our brains is the result of our experiences. The brain is very plastic. It can even change within a single person over time.

You’ve said we need a technological breakthrough to decode language from brain waves. What do you mean?

So there’s a guy at Northwestern named Konrad Kording who published a paper in 2011 in Nature Neuroscience detailing what he called Stevenson’s Law, named after his graduate student Ian Stevenson; it’s like Moore’s Law for computing chips. [Stevenson] had looked at the number of neurons that scientists can record from, and basically it’s doubled about every seven years.

But it's only about 500 [neurons] at this point. Kording said we'll be dead before we can record even part of a mouse brain. So there's a program from [the U.S. Defense Advanced Research Projects Agency, or DARPA] called Neural Engineering System Design. They're doling out about $60 million trying to get some sort of breakthrough. They want a device that can record from at least 100,000 neurons and also stimulate them. But it's hard to do. We need to develop a new way to do this.
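At that pace, the arithmetic is sobering. Here is a minimal sketch in Python using only the figures quoted above (roughly 500 neurons today, a doubling period of about seven years, and DARPA's 100,000-neuron goal); the function name is illustrative, not anything from the research:

```python
import math

def years_to_reach(current_neurons, target_neurons, doubling_years=7.0):
    """Project forward under Stevenson's Law: the number of neurons
    scientists can record from simultaneously has doubled roughly
    every seven years."""
    doublings = math.log2(target_neurons / current_neurons)
    return doublings * doubling_years

# Going from today's ~500 neurons to DARPA's 100,000-neuron target:
print(years_to_reach(500, 100_000))  # about 53.5 years at the historical pace
```

In other words, without a breakthrough that beats the historical trend, a 100,000-neuron recording device would be roughly half a century away, which is exactly the gap the DARPA program is trying to close.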

There's a group of people at Berkeley who have suggested that the solution is something called neural dust, nano-scale electrodes that you can put in the brain. Others have said the solution is simply to shrink existing electrodes. Some people have compared [the current technology] to trying to play piano with your forearms: you can't get the resolution you want. There's a paralyzed woman who drank coffee with a robotic arm, controlling it just by thinking, and that was remarkable.

But as one of the neuroscientists said to me, there’s no [brain-computer interface] that you would want to use to control a wheelchair on the edge of a cliff or to drive a car in heavy traffic. It’s not precise enough.

This sounds like a similar problem in robotics. Robots can do a lot of things, but some tasks are too intricate and detailed for a robot to do — at least not yet.

We’ve crossed the Rubicon, but we haven’t yet perfected the technology. That’s why in my book I also looked at technologies that are affecting people’s lives, like the [bionic leg research]. It’s the same kind of idea because you’re reverse engineering the human body and mind. There are a lot of remarkable stories of people being able to walk again.

It’s also the same with genetic engineering. We can now decode a human genome for under $1,000. But the fact is a lot of human diseases and human qualities like intelligence grow out of the interaction of many different genes and environmental factors. We’re still learning how to decode those. We’re able to do genetic therapy but not complicated genetic therapy.

What are researchers telling you about the science behind Elon Musk’s recent comments and predictions about merging human and artificial intelligence, about downloading and uploading thoughts?

In my book, I try to tell stories about things that are going on now. There are a lot of books that vaguely talk about the future, but I wanted to explain how the science works and what’s actually happening now so that people can evaluate these claims and see what’s sensationalistic and what’s not.

But Gerwin Schalk, who's working on imagined speech, believes his research is just one guidepost en route to an even grander endpoint. He believes that in the not-too-distant future we'll be able to seamlessly integrate the human mind and all of humanity with computers so that we won't need a keyboard or a mouse to type something into the web to get an answer. We'll be able to just think and we'll have instant access to every fact available on the web as if it were a memory or something. He says you'd have a billion people all hooked in, and there's no social media; everyone would just know what you're about and who you are. Suddenly you'd create this super society, and it would clearly transform not only human capacity but also what it means to be human.

I think that's relevant to what Elon Musk is talking about. He's worried about artificial intelligence, about machines destroying humanity. One of the reasons he's pushing for this neural lace [is] to overcome that challenge I was talking about earlier, the same kind of thing DARPA is funding: to try to find better sensors and overcome Stevenson's Law. One of the reasons Elon Musk wants to do this is so that we can link up to computers and have the same computational power, the same hive mind and the same type of intelligence that artificial intelligence would have, so that we can basically protect ourselves.


By Angelo Young
