How machine learning is helping us to understand the brain

An expert argues that neuroscience is using the wrong metaphors

Published November 25, 2017 5:58AM (EST)


This article originally appeared on Massive.

The workings of the brain are the greatest mystery in science. Unlike our models of physics, powerful enough to predict gravitational waves and unseen particles, our brain models explain only the most basic forms of perception, cognition, and behavior. We know plenty about the biology of neurons and glia, the cells that make up the brain. And we know enough about how they interact with each other to account for some reflexes and sensory phenomena, such as optical illusions. But even slightly more complex levels of mental experience have evaded our theories.

We are quickly approaching the point when our traditional reasons for pleading ignorance – that we don’t have the right tools, that we need more data, that brains are complex and chaotic – will no longer excuse our lack of explanations. Our techniques for seeing what the brain and its neurons are doing, at any given moment, get stronger every year.

But we are describing the brain with the wrong set of metaphors, basing our understanding on comparisons to communications fields like signal processing and information theory. Going forward, we should leave that flawed vocabulary behind. Instead, the words and ideas needed to unlock our brains come from a computational field much nearer to real biology: the expanding world of machine learning.

Homines ex machina?

For most of its history, “systems” neuroscience – the study of brains as large groups of interacting neurons – has tried to frame perception, action, and even cognition in terms taken from fields like signal processing, information theory, and statistical inference. Because these frameworks were essential for developing communications technology and data-processing algorithms, they suggested testable analogies for how neurons might communicate with each other or encode what we perceive with our senses. Many discussions in neuroscience would sound familiar to an audio engineer designing an amplifier: a certain region of the brain “filters” the sensory stimulus, “passing information” to the next “processing stage.”

Words of this sort smuggle in assumptions about how we expect to understand the brain. For instance, talking about different stages of processing implies that what goes on at one physical location in the brain can be distinguished from what goes on at another spot. And focusing on information, which has both a lay meaning and a precise mathematical definition, often conflates the two and postpones the question of what an animal actually needs to know to perform a certain behavior.

These borrowed descriptions proved fruitful for a time. Our computer algorithms for processing visual and auditory stimuli really do resemble the function of neurons in some parts of the brain, typically those closest to the sensory organs. This discovery was one of the earliest indications that we might understand the brain through simple, physics-like theories. If neurons really could be said to detect the edges in an image or break sounds down into their component frequencies, why shouldn’t the signal processing analogy extend to higher-level phenomena?
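
To see how literal this analogy was, consider a minimal sketch (in Python with NumPy; the image here is a random stand-in) of the classic model of an edge-detecting visual neuron: each model neuron computes a weighted sum over a small patch of the image, exactly the linear filtering an audio engineer would recognize.

```python
import numpy as np

def filter_response(image, kernel):
    """Slide a small linear filter across a grayscale image -- the classic
    signal-processing model of an edge-detecting neuron in visual cortex."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # weighted sum of the patch
    return out

# A vertical-edge detector: responds most where dark pixels border bright ones.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-2.0, 0.0, 2.0],
                          [-1.0, 0.0, 1.0]])

image = np.random.rand(32, 32)  # stand-in for a real grayscale image
responses = filter_response(image, vertical_edge)
```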

Yet decades of failure to significantly improve on these early models suggest that the analogy was the wrong approach. So we must confront a frustrating truth: brains were not designed, as our best algorithms were, to implement mathematically optimal solutions to particular problems; they came about through a blind and unintelligent process. Evolution designs its masterpieces not by epiphany but by persistence, and as in any heavily revised project, many of the decisions that made sense along the way are inscrutable now. It follows a single rule: get it right or die trying.

Man as algorithm

Because of this, we need to explain how brains work in evolutionary language. And the neural traits that natural selection acts on are the algorithms the brain uses to convert sensory stimuli into actions, since these final outputs – behavior – are what determine survival. So any theory we use to model the brain must still follow computational principles, just not those of signal processing.

Fortunately, the mindless, ruthlessly efficient process of natural selection has a twin: machine learning, a form of artificial intelligence (AI) that now outperforms humans at recognizing objects and reading handwriting, and may soon do so at even more complex tasks.

Machine learning algorithms differ from other forms of AI in the same way that evolution differs from intelligent design. In creating an algorithm to perform a certain task – say, recognizing images of hot dogs – an intelligent designer would think carefully about the architecture and the specific computations: how many steps should there be? Should each step look at part of the image or the whole thing? Should it be on the alert for groups of pixels with oblong shapes or reddish hues?

A machine learning method, on the other hand, does away with most of this design process. The algorithm is learned, automatically, based on a learning rule that tells the system how to adjust its internal parameters in order to achieve better performance. Thus the complex details of these methods surround an utterly simple idea: do whatever works, and get better by taking small steps in the right direction. This is like reaching the top of the mountain by always walking slowly uphill, rather than parachuting directly onto the peak in a feat of careful planning.
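
Written out, that idea barely fills a dozen lines. The sketch below (plain Python; the performance function is a hypothetical placeholder, not any real task) captures the essence of such a learning rule: propose a small random change to the parameters, and keep it only if it helps.

```python
import random

def hill_climb(params, performance, n_steps=10000, step_size=0.01):
    """Do whatever works: take small random steps, keep the ones that help."""
    best = performance(params)
    for _ in range(n_steps):
        # Propose a small random tweak to every parameter.
        candidate = [p + random.gauss(0.0, step_size) for p in params]
        score = performance(candidate)
        if score > best:  # keep the tweak only if performance improved
            params, best = candidate, score
    return params

# Toy run: walk uphill to the peak of an upside-down paraboloid at (1, -2).
peak = hill_climb([0.0, 0.0],
                  lambda p: -((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2))
```

(Real machine learning methods typically compute which direction is uphill rather than guessing, but the walking-slowly-uphill principle is the same.)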

Evolution – the process that gave rise to octopodes, beaver dams, and our brains – is the canonical “stupid designer.” It has no foresight to know what a hot dog (or a lion, or a squirrel) looks like, so the algorithms the brain uses to identify them must have arisen by taking small steps in the right direction, for millions of years. Like the machine learning rule, this principle is so simple as to sound vacuous: do whatever works. To the extent that monkeys needed to spot lions to survive and reproduce, the genes that construct a brain with a better lion-detector would be favored over the generations, without any designer stating explicitly what a lion looks like.
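
Evolution’s version of the same loop simply swaps one candidate for a population. Here is a minimal sketch, assuming some way to score how well a genome “works” (the fitness function below is a toy stand-in, not a real lion detector):

```python
import random

def evolve(population, fitness, n_generations=500, mutation_rate=0.05):
    """Selection without foresight: score, keep the fittest, copy with errors."""
    for _ in range(n_generations):
        # Whatever works, survives: rank genomes by fitness.
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:len(ranked) // 2]
        # Refill the population with mutated copies of the survivors.
        offspring = [[g + random.gauss(0.0, mutation_rate) for g in genome]
                     for genome in survivors]
        population = survivors + offspring
    return max(population, key=fitness)

# Toy run: genomes are small weight vectors; this fitness just rewards being
# close to zero -- a stand-in for "detects lions well."
pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
best = evolve(pop, lambda genome: -sum(g * g for g in genome))
```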

We don’t yet know the details of how evolution gave rise to such complex and effective neural systems, but one thing is clear: what it lacks in predicting the future, it more than makes up for in flexibility and repurposing of old ideas. When the writers of Silicon Valley converted the hot dog-sensing AI demo into a lucrative porn detector, saving their software developer characters from bankruptcy at the last minute, they captured exactly the spirit of evolution. We tailored our earlier models of the brain’s algorithms with the optimal solutions in mind, but evolution is a lover of sloppy shortcuts. Put another way, our audio amplifiers and digital cameras work very well for their intended tasks because some engineer, at some time, designed each piece of the circuit for a specific purpose. Evolution doesn’t have an ultimate goal in mind.


Just because machine learning and evolution appear to use a similar principle doesn’t mean algorithms performed by brain tissue and algorithms performed by computers will resemble each other. But it turns out they do: the architecture that works best for computer vision algorithms is modeled after the visual part of the primate brain. More remarkably, the precise computational outputs learned by trial and error – not designed from on high – allow us to predict how real neurons will respond when a monkey looks at images. For all our intelligence, the best hand-crafted visual models, built upon the principles of signal processing theory, have predicted most of these real responses poorly. Even the early, successful models of more basic neural computations are quickly falling to machine learning models that explain the same data better.
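
For the curious, the models in question look roughly like the sketch below: a stack of convolution, nonlinearity, and pooling stages that loosely mirrors the stages of the primate visual system. This uses PyTorch, and the layer sizes are illustrative, not those of any published model.

```python
import torch.nn as nn

# Illustrative only: a small convolutional stack of the kind whose learned
# features have been used to predict the responses of real visual neurons.
ventral_stream_sketch = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=7),    # early stage: local oriented filters
    nn.ReLU(),
    nn.MaxPool2d(2),                    # pooling: tolerance to small shifts
    nn.Conv2d(32, 64, kernel_size=5),   # mid stage: combinations of features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=3),  # late stage: object-selective features
    nn.ReLU(),
)
```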

‘The brain is a just-so story, told by evolution’

This empirical relationship between “stupidly” designed brains and machines is astonishing – and it’s telling us something important.

Some of the innovations produced by evolution will have easy, plain-language descriptions. The simple signal processing models may be just such a case, where the sweat-and-dirt solution mostly overlaps with the high-minded design. Yet we should be prepared for a majority of cases where the best explanations, when pulled from their native genetic and mathematical languages to be rendered in English, sound like Zen koans. Why does that neuron respond to faces and boats and cherry pie? Because neurons like that make the brain work.

None of this means we should stop seeking explanations for features of the brain. Instead of trying to put the brain’s algorithms in familiar terms, though – the flat-out false claims that “we now understand how the brain does X” – our new explanations should sound like the ones an evolutionary biologist would give. They must answer: what behavior was the animal under pressure to perform? How much, quantifiably, does the thing we are looking at help with that behavior? And finally, what other pieces need to be in place (neural architecture, plasticity, sensory ability) before this particular feature is useful?

Neuroscientists have shied away from “just-so stories,” but this has always been a perverse term of disparagement. The brain is a just-so story, told by evolution: the problem is figuring out which one. The stampede of successes in machine learning proves that utterly simple rules can be enough to solve complex problems. Reframing the brain in terms of just such a rule – the one that explains how we all got here – may finally bring our theories into alignment with reality.

By Daniel Bear
