Saturday, Oct 20, 2012 5:00 PM UTC

Where does language come from?

How do we understand what words really mean? New science suggests we make meaning by creating mental simulations


Making meaning is one of the most important things we do. For starters, it’s something we’re doing almost constantly. We swim in a sea of words. Every day, we hear and read tens of thousands of them. And somehow, for the most part, we understand them. Constantly, tirelessly, automatically, we make meaning. What’s perhaps most remarkable about it is that we hardly notice we’re doing anything at all. There are deep, rapid, complex operations afoot under the surface of the skull, and yet all we experience is seamless understanding.

Meaning is not only constant; it’s also critical. With language, we can communicate what we think and who we are. Without language, we would be isolated. We would have no fiction, no history, and no science. To understand how meaning works, then, is to understand part of what it is to be human.

And not just human, but uniquely human. No other animal can do what we can with language. Of course, parts of human language have homologues in other animals. People talk fast, and sentences can be extremely complicated, but zebra finches sing tunes that rival our speed and complexity. Humans can drone on and on, but even a filibustering senator doesn’t outlast humpback whales, whose songs can continue for hours. And although the human ability to combine words in new ways seems pretty unique, it’s seen on a more limited scale in bees, who dance messages to each other that combine information about the orientation, quality, and distance of food sources.

For all these reasons, language has held a privileged spot in science and philosophy throughout history. For centuries, philosophers have asked, what is it that we humans have that our tongue-tied relatives don’t? What cognitive capacities has evolution endowed us with that allow us to understand — and appreciate — sonnets and songs, exhortations and explanations, newspapers and novels?

But for the most part, we’ve failed to answer the most important question of all. Almost no one, from lay people to linguists, really knows how meaning works.

That is, until recently. This is the age of cognitive science. Using fine measures of reaction time, eye gaze, and hand movement, as well as brain imaging and other state-of-the-art tools, we’ve started to scrutinize humans in the act of communicating. We can now peer inside the mind and thereby put meaning in its rightful place at the center of the study of language and the mind. With these new tools, we’ve managed to catch a glimpse of meaning in action, and the result is revolutionary. The way meaning works is much richer, more complex, and more personal than we ever would have predicted.

The Traditional Theory of Meaning

The scientific study of meaning is still in its infancy. But even in the absence of solid empirical evidence, theories about how meaning works have developed and thrived. Over the years, most linguists, philosophers, and cognitive psychologists have come to settle on a particular story that probably isn’t so different from your intuitive sense of meaning. When you contemplate meaning in your daily life, it’s likely because you’re wondering (or perhaps arguing about) what a given word means. It might be a word in your own language: What does obdurate mean? (Stubbornly persistent in wrongdoing, in case you were wondering.) Or it could be a word in another language: What does the formidable German word Geschwindigkeitsbegrenzung mean? (Speed limit.) In general, you’re probably most aware of meaning when you’re thinking about definitions. This is also the starting point for the traditional theory of meaning: Words have meanings that are like definitions in your mind.

What would it be like if meaning worked this way? When you think about it, a definitional meaning would need to have two distinct parts. The first is the definition itself. This is a description of what the word means. It’s articulated in a particular language, like English, and is supposed to be a usable characterization of the meaning. But there’s a second part, too, which is implicit. The definition characterizes something in the world. So speed limit refers to something that exists in real life, independent of your knowledge about it — whether you know that there’s a speed limit, or what it is, you can still get pulled over for driving faster than the number on the sign. So both the mental definition and the actual thing in the world that the word refers to are each critical parts of the meaning of a word.

Many philosophers have taken it as a given that these two parts are all you need to characterize meaning. And they’ve gone on to argue for centuries about which of the two parts is more important—the mental definition or the real world. But the important question for our purposes — to understand how people understand — is to ask how a definitional theory of meaning like this could explain the things we do with language. Do we really have these definitions in our minds? If so, where do they come from? How could we use them to plan a sequence of words? How could we use them to understand something that someone else has said?

This is where things get a little more complicated. As with any definition, your mental definitions would presumably need to be articulated in some language. But what language? Your first thought might be that it should be your native language. Except, when you follow that idea to its logical conclusion, there’s a problem. If English words are defined in your mind in terms of other English words, then how do you understand the definitions themselves? You end up going in circles.

One solution to this problem is to suppose that we have some other system in our mind — some way to encode ideas and thoughts and reasoning that doesn’t use English or any real language. This mental language would need to have a lot of the stuff that a real language has — it would still have to be able to refer to things in the world, as well as properties, relations, actions, events, and so on — anything that we can think about and understand language about. In other words, we might be thinking using something like a language of thought or Mentalese. Simply stated, the language of thought hypothesis is that the meanings of words and sentences in any real language are articulated in people’s minds in terms of this other, mental language. Mentalese is supposed to be like a real language in that there are words that mean things and can combine with one another, but, unlike a real language, it doesn’t sound like anything or look like anything.

But even if Mentalese gets us out of the vicious circle of words defined in terms of other words, it still only gets us part way to meaning. That’s because it doesn’t deal with the other half of a definitional theory of meaning — the things in the world that the Mentalese words refer to. According to the language of thought hypothesis, the words of Mentalese are related to the world through a symbolic relationship.

Over the centuries, this has come to be the leading idea about how meaning works. Words are meaningful because you have mental definitions for them — articulated in Mentalese — that match up to things in the real world.

Embodied Simulation

But if you look a little closer at the language of thought hypothesis, you’ll find that there are actually some holes in it. The biggest one is that Mentalese doesn’t actually solve the problems inherent in a definitional theory of meaning — it simply pushes them back a level. The issue is akin to the earlier question of how an English definition of an English word could ever mean anything. Namely: How do we know what the words in Mentalese mean?

This is one of the big problems with the language of thought hypothesis. And when you start to apply a little pressure, other cracks start to appear. For one, where does Mentalese come from? If it’s something that’s learned, then it certainly can’t be learned through one’s native language, because that creates another vicious circle: How could we learn Mentalese based on English if we only understand English through Mentalese?

Starting as early as the 1970s, some cognitive psychologists, philosophers, and linguists began to wonder whether meaning wasn’t something totally different from a language of thought. They suggested that — instead of abstract symbols — meaning might really be something much more closely intertwined with our real experiences in the world, with the bodies that we have. As a self-conscious movement started to take form, it took on a name, embodiment, which started to stand for the idea that meaning might be something that isn’t distilled away from our bodily experiences but is instead tightly bound by them.

It’s not clear who had the idea first, but in the mid-1990s at least three groups converged upon the same thought. The idea was the embodied simulation hypothesis, a proposal that would make the idea of embodiment concrete enough to compete with Mentalese. Put simply: Maybe we understand language by simulating in our minds what it would be like to experience the things that the language describes.

Let’s unpack this idea a little bit — what it means to simulate something in your mind. We actually simulate all the time. You do it when you imagine your parents’ faces or fixate in your mind’s eye on that misplayed poker hand. You’re simulating when you imagine sounds in your head without any sound waves hitting your ears, whether it’s the bass line of the White Stripes’ “Seven Nation Army” or the sound of screeching tires. And you can probably conjure up simulations of what strawberries taste like when covered with whipped cream or what fresh lavender smells like. You can also simulate actions.

Now, in all these examples, you’re consciously and intentionally conjuring up simulations. That’s called mental imagery. The idea of simulation is something that goes much deeper. Simulation is an iceberg. By consciously reflecting, as you just have been doing, you can see the tip — the intentional, conscious imagery. But many of the same brain processes are engaged, invisibly and unbeknownst to you, beneath the surface during much of your waking and sleeping life. Simulation is the creation of mental experiences of perception and action in the absence of their external manifestation.

That is, it’s having the experience of seeing without the sights actually being there or having the experience of performing an action without actually moving. When we’re consciously aware of them, these simulation experiences feel qualitatively like actual perception; colors appear as they appear when directly perceived, and actions feel like they feel when we perform them. The theory proposes that embodied simulation makes use of the same parts of the brain that are dedicated to directly interacting with the world. The idea is that simulation creates echoes in our brains of previous experiences, attenuated resonances of brain patterns that were active during previous perceptual and motor experiences. We use our brains to simulate percepts and actions without actually perceiving or acting.

In this context, the embodied simulation hypothesis doesn’t seem like too much of a leap. It hypothesizes that language is like these other cognitive functions in that it, too, depends on embodied simulation.

The idea is that you make meaning by creating experiences for yourself that, if you’re successful, reflect the experiences that the speaker or the writer intended to describe. Meaning, according to the embodied simulation hypothesis, isn’t just abstract mental symbols; it’s a creative process, in which people construct virtual experiences — embodied simulations — in their mind’s eye.

Excerpted with permission from “Louder Than Words: The New Science of How the Mind Makes Meaning” by Benjamin K. Bergen.  Available from Basic Books, a member of The Perseus Books Group. Copyright © 2012.