EXPLAINER

Artificial intelligence research may have hit a dead end

"Misfired" neurons might be a brain feature, not a bug — and that's something AI research can't take into account

Published April 30, 2021 6:00PM (EDT)

Artificial Intelligence robot face is divided in two parts, completion and networking on circuit background (Getty Images)

Philip K. Dick's iconic 1968 sci-fi novel, "Do Androids Dream of Electric Sheep?" posed an intriguing question in its title: would an intelligent robot dream?

In the 53 years since its publication, artificial intelligence research has matured significantly. And yet, despite Dick being prophetic about technology in other ways, the question posed in his title has drawn little interest from AI researchers; no one is trying to invent an android that dreams of electric sheep.

Why? Mainly, it's that most artificial intelligence researchers and scientists are busy trying to design "intelligent" software programmed to do specific tasks. There is no time for daydreaming.

Or is there? What if reason and logic are not the source of intelligence, but its product? What if the source of intelligence is more akin to dreaming and play?

Recent research into the "neuroscience of spontaneous fluctuations" points in this direction. If true, it would be a paradigm shift in our understanding of human consciousness. It would also mean that just about all artificial intelligence research is heading in the wrong direction.

* * *

The quest for artificial intelligence grew out of the modern science of computation, pioneered in the mid-twentieth century by the English mathematician Alan Turing and the Hungarian-American mathematician John von Neumann. Since then, there have been many approaches to studying artificial intelligence. Yet all of them have one thing in common: they treat intelligence computationally, i.e., like a computer with an input and output of information.

Scientists have also tried modeling artificial intelligence on the neural networks of human brains. These artificial neural networks use "deep-learning" techniques and "big data" to approach, and occasionally surpass, particular human abilities, such as playing chess, Go, and poker, or recognizing faces. But these models still treat the brain like a computer, as do many neuroscientists. Is this the right idea for designing intelligence?

The present state of artificial intelligence is limited to what those in the field call "narrow AI." Narrow AI excels at accomplishing specific tasks in a closed system where all possibilities are known. It is not creative and typically breaks down when confronted with novel situations. On the other hand, researchers define "general AI" as the innovative transfer of knowledge from one problem to another.

So far, this is what AI has failed to achieve and what many in the field believe to be only an extremely distant possibility. Most AI researchers are even less optimistic about the possibility of a so-called "superintelligent AI" that would become more intelligent than humans due to a hypothetical "intelligence explosion."     


Computer Brains? 

Does the brain transmit and receive binary information like a computer? Or do we think of it this way because, since antiquity, humans have always used their latest technology as a metaphor for the brain?

There are certainly some ways the computer-brain metaphor makes sense. We can undoubtedly assign a binary number to a neuron that has either fired ("1") or not ("0"). We can even measure the electrochemical thresholds needed for individual neurons to fire. In theory, a neural map of this information should give us the causal path or "code" for any given brain event. But experimentally, it does not.

For starters, this is because neurons, unlike transistors, have no fixed threshold voltage acting as a logic gate to determine whether they will activate ("1") or not ("0"). Decades of neuroscience have experimentally shown that neurons can change their function and firing thresholds over time, something no transistor or binary signal can do. This capacity is called "neuroplasticity," and computers do not have it.
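
To make the contrast concrete, here is a minimal sketch in Python, purely an illustration of the general idea rather than a model from any study cited here. A transistor-like gate always gives the same output for the same input; a toy neuron whose threshold drifts with its own activity does not.

# Toy contrast between a fixed-threshold "gate" and a plastic neuron.
# All numbers are arbitrary illustrations.

def gate(voltage, threshold=0.7):
    """Transistor-like switch: the same input yields the same output, always."""
    return 1 if voltage >= threshold else 0

class PlasticNeuron:
    """Simplified neuron whose firing threshold drifts with its own activity."""
    def __init__(self, threshold=0.7):
        self.threshold = threshold

    def fire(self, voltage):
        fired = voltage >= self.threshold
        # Crude stand-in for plasticity: firing raises the threshold
        # (adaptation); silence lowers it again.
        self.threshold += 0.10 if fired else -0.05
        return 1 if fired else 0

neuron = PlasticNeuron()
print([gate(0.75) for _ in range(5)])         # [1, 1, 1, 1, 1]
print([neuron.fire(0.75) for _ in range(5)])  # [1, 0, 1, 0, 0]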

Computers also do not have equivalents of chemicals called "neuromodulators" that flow between neurons and alter their firing activity, efficiency, and connectivity. These brain chemicals allow neurons to affect one another without firing. This violates the binary logic of "either/or" and means that most brain activity occurs between an activated and nonactivated state.
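
A rough sketch of that idea follows; the function and numbers are invented for this example, not drawn from any neuroscience model. A modulator changes how responsive a neuron is without any spike being fired, so the neuron's state is graded rather than a clean 0 or 1.

# Illustrative only: a "neuromodulator" scales a neuron's responsiveness
# ("gain") without itself firing, yielding a graded, non-binary state.

def response_level(input_drive, modulator_level):
    gain = 1.0 + modulator_level           # modulator raises or lowers gain
    return min(max(input_drive * gain, 0.0), 1.0)

print(response_level(0.4, 0.0))  # 0.4 -- baseline responsiveness
print(response_level(0.4, 1.0))  # 0.8 -- same input, modulated neuron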

Furthermore, the cause and pattern of neuron firing are subject to what neuroscientists call "spontaneous fluctuations": neuronal activity that occurs in the brain even when no external stimulus or mental behavior correlates with it. These fluctuations make up an astounding 95% of brain activity, while conscious thought occupies the remaining 5%. In this way, spontaneous fluctuations are like the dark matter or "junk" DNA of the brain: they make up the biggest part of what's happening but remain mysterious.

Neuroscientists have known about these unpredictable fluctuations in electrical brain activity since the 1930s, but have not known what to make of them. Typically, scientists have preferred to focus on brain activity that responds to external stimuli and triggers a mental state or physical behavior. They "average out" the rest of the "noise" from the data. However, precisely because of these fluctuations, there is no universal activation level in neurons that we can call "1." Neurons are constantly firing, but, for the most part, we don't know why. 
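
The "averaging out" step is easy to illustrate with a schematic sketch; this is a generic toy, not any lab's actual pipeline. Stack enough trials and the stimulus-locked response survives while the spontaneous fluctuations cancel.

# Schematic of trial averaging: the evoked response is the same on every
# trial; the spontaneous "noise" is not, so averaging erases it.
import random

def one_trial(n_samples=100):
    evoked = [1.0 if 40 <= t < 60 else 0.0 for t in range(n_samples)]
    spontaneous = [random.gauss(0.0, 0.5) for _ in range(n_samples)]
    return [e + s for e, s in zip(evoked, spontaneous)]

trials = [one_trial() for _ in range(1000)]
average = [sum(col) / len(col) for col in zip(*trials)]
# "average" now shows a clean evoked bump; the fluctuations that dominated
# each individual trial have vanished from the record.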

What might be the source of these spontaneous fluctuations? Recent studies in the neuroscience of spontaneous thought suggest that they may be related to internal neural mechanics, heart and stomach activity, and tiny physical movements in response to the world. Other experiments, by David McCormick at Yale University School of Medicine in 2010 and Christof Koch at Caltech in 2011, have demonstrated that neuronal firing creates electromagnetic fields strong enough to perturb how neighboring neurons fire.

The brain gets even wilder when we zoom in. Since electrochemical thresholds activate neurons, a single proton could, in principle, be the difference that causes a neuron to fire. If a proton spontaneously jumped out of its atomic bonds, in what physicists call "quantum tunneling," it could set off a cascade of sudden neural activity. So even at the tiniest measurable level, the neuron's physical structure has a non-binary indeterminacy.

Computer transistors have the same problem. The smaller manufacturers make their electronics, the smaller transistors get, and the more frequently electrons spontaneously tunnel through the thinner barriers, producing errors. This is why computer engineers, just like many neuroscientists, go to great lengths to filter out "background noise" and "stray" electrical fields from their binary signal.
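
The scaling behind this is textbook quantum mechanics (the standard WKB estimate, added here for illustration, not something the article derives): the probability that an electron tunnels through a barrier falls off exponentially with the barrier's width, so thinner insulating layers leak exponentially more often.

$$ T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar} $$

Here $L$ is the barrier's width, $V_0$ its height, $E$ the electron's energy, and $m$ its mass.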

This is a big difference between computers and brains. For computers, spontaneous fluctuations are errors that crash the system; for our brains, they are a built-in feature.

The future of AI is not what you think

What if noise is the new signal? What if these anomalous fluctuations are at the heart of human intelligence, creativity, and consciousness? This is precisely what neuroscientists such as Georg Northoff, Robin Carhart-Harris, and Stanislas Dehaene are showing. They argue that consciousness is an emergent property born from the nested frequencies of synchronized spontaneous fluctuations. Applying this theory, neuroscientists can even tell whether someone is conscious just by looking at their brain waves.

AI has been modeling itself on neuroscience for decades, but can it follow this new direction? Stanislas Dehaene, for instance, considers the computer model of intelligence "deeply wrong," in part because "spontaneous activity is one of the most frequently overlooked features" of it. Unlike computers, "neurons not only tolerate noise but even amplify it" to help generate novel solutions to complex problems.  

"Just as an avalanche is a probabilistic event, not a certain one, the cascade of brain activity that eventually leads to conscious perception is not fully deterministic: the very same stimulus may at times be perceived and at others remain undetected. What makes the difference? Unpredictable fluctuations in neuronal firing sometimes fit with the incoming stimulus, and sometimes fight against it."

Accordingly, Dehaene believes that AI would require something akin to synchronized spontaneous fluctuations to be conscious. Johnjoe McFadden, a Professor of Molecular Genetics at the University of Surrey, speculates that spontaneous electromagnetic fluctuations might even have been an evolutionary advantage to help closely packed neurons generate and synchronize novel adaptive behaviors. "Without EM field interactions," he writes, "AI will remain forever dumb and non-conscious." The German neuroscientist Georg Northoff argues that a "conscious…artificial creature would need to show spatiotemporal mechanisms such as… the nestedness and expansion" of spontaneous fluctuations.  

Relatedly, Colin Hales, an artificial intelligence researcher at the University of Melbourne, has observed how strange it is that AI scientists have not yet tried to create an artificial brain in the same way other scientists have made artificial hearts, stomachs, or livers. Instead, AI researchers have created theoretical models of neuron patterns without their corresponding physics. It is as if instead of building airplanes, AI researchers are designing flight simulators that never leave the ground, Hales says.

How might the recent science of spontaneous brain fluctuations change our way of thinking about AI? If this contemporary neuroscience is correct, AI cannot be a computer with input and output of binary information. Like the human brain, 95% of its activity would have to be "nested" spontaneous fluctuations akin to our unconscious, wandering, and dreaming minds. Goal-directed and instrumental behaviors would be a tiny fraction of its developed form. 

If we looked at its electroencephalogram (EEG), it would have to show "signatures of consciousness" similar to those Dehaene has experimentally shown to be necessary. Why would we expect consciousness to exist independently of the signatures that define our own? Yet that is the assumption AI research makes. AI would also likely need to make use of the quantum and electrodynamic perturbations that scientists presently filter out.

Spontaneous fluctuations come from the physical material of embedded consciousness. There is no such thing as matter-independent intelligence. Therefore, to have conscious intelligence, scientists would have to integrate AI in a material body that was sensitive and non-deterministically responsive to its anatomy and the world. Its intrinsic fluctuations would collide with those of the world like the diffracting ripples made by pebbles thrown in a pond. In this way, it could learn through experience like all other forms of intelligence without pre-programmed commands. 

If it's true that cognitive fluctuations are requisite for consciousness, it would also take time for stable frequencies to emerge and then synchronize with one another in resting states. And indeed, this is precisely what we see in children's brains when they develop higher and more nested neural frequencies over time.

Thus, a general AI would probably not be brilliant in the beginning. Intelligence evolved through the mobility of organisms trying to synchronize their fluctuations with the world. It takes time to move through the world and learn to sync up with it. As the science fiction author Ted Chiang writes, "experience is algorithmically incompressible." 

This is also why dreaming is so important. Experimental research confirms that dreams help consolidate memories and facilitate learning. Dreaming is also a state of exceptionally playful and freely associated cognitive fluctuations. If this is true, why should we expect human-level intelligence to emerge without dreams? It may be why newborns dream twice as much as adults do, assuming dreaming occurs during REM sleep. They have a lot to learn, as would androids.

In my view, there will be no progress toward human-level AI until researchers stop trying to design computational slaves for capitalism and start taking the genuine source of intelligence seriously: fluctuating electric sheep.


By Thomas Nail

Thomas Nail is Professor of Philosophy at the University of Denver and the author of numerous books and articles.
