The breakdown of consciousness

Confronted by the discoveries of artificial intelligence, some philosophers are questioning the very minds that keep their profession afloat.

Published November 20, 1998 6:19PM (EST)

Remember Deep Blue, the IBM-produced computer that beat Garry Kasparov last year in a chess match? Red-faced and spent from confronting his "opponent" and its team of experts, the frustrated world champion muttered afterwards, "I'm not afraid to admit that I'm afraid. And I'm not even afraid to say why I'm afraid, because sometimes, you know, it definitely goes beyond any known chess program in the world."

Kasparov's somewhat inchoate musing manages to express a deep-seated feeling among humans when confronted with the deeds of artificial intelligence, commonly known as AI. Did Deep Blue exhibit intelligent behavior in defeating Kasparov? Did it "trick" him on occasion and play "humanlike" strategies? Might Deep Blue's performance have demonstrated a proto-conscious intelligence, one that could grow with bigger and faster computers?

Debates around the meaning of consciousness are among the quirkier instances in which intense public and academic interest collide. While few outside the ivory tower would find much of what philosophers and artificial intelligence theorists write to be bearable reading, recent articles about the logical capabilities of the brainy ape Kanzi have charmed all those interested in non-human cognition. Displacing Koko as the darling of the ape-studies world, Kanzi's purported symbol-using skills once again remind us that humans should not be considered the only standard for determining what thinking, logic and consciousness are.

What accounts for this convergence? Berkeley neurobiologist Walter J. Freeman believes that our fascination with computers and consciousness is deeply embedded in our cultural repository. "The evolution and maturation of computers," he explains, "has rekindled a very old debate about the possibility of creating a machine that not only thinks, but is aware that it thinks. This is an idea that goes back to the image of God creating man from dust and breathing life into the inert matter, to the golem and to 'Pygmalion,' and now to the digital computer in a robot, the 'giant brain' on wheels."

Reformulating an old 1950s science-fiction theme as well, Deep Blue has once again planted in the popular imagination the incubus of the thinking machine -- a conscious intelligence, smarter than its creators, that sometimes menaces the very civilization that brought it into existence.

But what does this really mean, a "conscious" machine? Not so fast, say the gurus of philosophy and artificial intelligence. For many of them, consciousness itself is a trick -- one that a system (for example, the human body) plays on itself. You may be surprised to learn that a dominant strand in philosophy today claims that consciousness doesn't really exist, or at least not the way we commonly think of it.

The main proponents of this idea, such as Tufts University's Daniel Dennett, tell us that our feeling of self-awareness, and our sense of subjective, inner experiences such as pain, are simply physical states, nothing more. Consciousness, which we commonly think of as something having to do with our minds rather than our bodies, is just another physical accessory that makes it easier for us to get around in the world. The old distinction between mind and body is actually thrown out altogether. Against our everyday understanding of the mind as a kind of pilot that commands the body to do things, many philosophers argue that the "mind as pilot" idea is just a fiction -- created by and subject to physical processes in the brain.

Taking an evolutionary view, Dennett argues that this physical state we call "consciousness" is just one among many beneficial adaptations the human species has made in its struggle to perpetuate itself.

Philosophers often try to lure readers into boring, difficult material by asking them to consider weird situations. Most people have at least heard of the "Brain in the Vat" thought experiment: What if an evil scientist put my brain in a vat and, by stimulating whatever needed to be stimulated, convinced me that I had a body and was experiencing things (such as a sunset on a warm, windy night), when in fact I was just sitting in a vat?

While such mind games exploit a kind of adolescent paranoia most of us non-philosophers have outgrown, they also force us to think about things we take for granted. When it comes to consciousness, everyone assumes that other human beings are all conscious selves with desires and beliefs of their own. But what if there existed automatons that looked just like people, and talked and behaved just like them as well, but were totally unconscious? What if there were zombies among us? Would we be able to tell the difference, and how?

The zombie hypothesis is a crucial one for philosophers, because it calls into question the importance of external behavior as proof of conscious activity. Anyone arguing that a person, animal or thermostat is conscious simply because it exhibits specific traits we associate with consciousness must confront the zombie hypothesis: Maybe behavior alone is just not enough to demonstrate subjectivity.

Common sense indicates that we know when a thing is conscious, because it communicates something to us about what it is feeling. When I accidentally step on my dog's tail and he yelps and is visibly hurt and fearful, I read these yelps as signs that he is able to report something about his inner experience. Doesn't this indicate that he is conscious? Well, perhaps not, say the zombie theorists, because anything might be "programmed" to give the expected responses in any situation. A zombie-dog might "seem" to report his inner state, but would not really be conscious at all.
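The point is easier to see in the concrete. A few lines of code -- a toy sketch of my own, and obviously nothing a philosopher would mistake for a real dog -- can produce all the outward signs of canine distress without there being anything it is like to be the program:

    # A toy "zombie dog": canned responses keyed to stimuli.
    # Nothing in here feels anything; it just looks up the expected behavior.
    RESPONSES = {
        "tail stepped on": "Yelp! (cowers, looks hurt and fearful)",
        "food bowl filled": "Wags tail, eats eagerly",
        "stranger at door": "Barks and growls",
    }

    def react(stimulus: str) -> str:
        """Return the behavior an observer expects -- with no inner state at all."""
        return RESPONSES.get(stimulus, "Tilts head quizzically")

    print(react("tail stepped on"))  # outwardly indistinguishable from a real yelp

On the outside, the "report" looks the same; the zombie theorist's question is whether anything on the inside comes with it.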

Now, you may consider the zombies-among-us theory so ridiculous that it hardly warrants the time and energy of our most artful and subtle minds. In that case, you will be happy to learn that Daniel Dennett agrees with you, although not for the reasons you might expect.

Dennett is frankly irritated that anybody still holds on to the idea of consciousness as something other than our physical bodies, something "added onto" or "arising from" physical cause-and-effect relations. The problem with the zombie hypothesis for him is that it presumes that consciousness is some extra thing -- which may well come from having a certain physical structure, like a body and a brain, but which does not itself have any physical effects. But if consciousness has no physical effects, how could anyone prove that it exists or doesn't exist?

Instead, Dennett thinks the zombie hypothesis is dumb because it misses his point. If zombies are the same as humans down to every single physical detail, this only goes to show that nobody is really conscious at all, at least in the sense in which the zombie theorists use the word. Consciousness is a function of specific physical relations (in our case, developed through evolution), not some mysterious, unknowable "extra."

It may be an ugly and brutish conclusion, but Dennett says that if we are forced to accept consciousness as something nonphysical, then, in fact, "we're all zombies."

Dennett and those who broadly agree with him are known as functionalists: "What makes something a mind (or a belief, or a pain, or a fear) is not what it is made of, but what it can do." This means that a mind does not have to be made up of special biological components like neurons and chemicals. In fact, it could be made up of anything (silicon, for example) so long as it was set up in the right way, with the right kind of causal relationships between its parts.
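To make the functionalist claim concrete, consider a toy sketch (mine, not Dennett's, and no one claims a few lines of code are a mind) in which the same causal organization is realized in two entirely different substrates:

    # A toy illustration of functionalist "multiple realizability":
    # what matters is the causal role, not the material playing it.

    class NeuralSubstrate:
        """Stands in for biological wetware -- neurons and chemicals."""
        def signal(self, intensity: int) -> bool:
            return intensity > 5  # "fires" above a threshold

    class SiliconSubstrate:
        """Entirely different stuff, but the same input-output organization."""
        def signal(self, intensity: int) -> bool:
            return intensity > 5

    def in_pain(substrate, intensity: int) -> str:
        # To the functionalist, both systems are in the same "mental state"
        # whenever they play the same causal role.
        return "ouch!" if substrate.signal(intensity) else "fine"

    for s in (NeuralSubstrate(), SiliconSubstrate()):
        print(type(s).__name__, in_pain(s, 9))  # both report "ouch!"

On the functionalist view, the two systems above are, mentally speaking, in the same state; what they are made of is beside the point.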

John Searle, who is adept at ridiculing functionalism from a common-sense standpoint, has commented that "nobody ever became a functionalist by reflecting on his or her most deeply felt beliefs and desires, much less their hopes, fears, loves, hates, pains and anxieties." Indeed, functionalism just seems obviously wrong to him. If it truly does not matter what a system is made out of, he wrote in the New York Review of Books, then "the functionalist would be forced to say that all kinds of inappropriate systems have mental states. According to the functionalist view, a system made of beer cans, or ping-pong balls, or the population of China as a whole, could have mental states such as beliefs, desires, pains, and itches."

But philosophers like Searle are pretty beleaguered right now. Typically, those opposing functionalism will say that it doesn't really explain consciousness, but simply explains it away or, even worse, denies its existence altogether.

And how can one possibly deny the feeling of grief, the taste of a delicious meal or the pleasure of a good massage? These are all qualitative experiences that don't appear to be easily explained by biology or the functionalist view. There seems to be something about our subjective experiences that just can't be explained by the firing of neurons and chemical properties.

Science may be able to explain many things, these philosophers argue, but it is not yet able to tell us what it feels like to be subject to pain or to take pleasure or to have any other subjective experience. Put another way, science can tell us how certain objects reflect light in such a way as to produce certain effects that, when processed by our brains, we distinguish as "red," but it cannot tell us what it feels like to see red, to experience redness.

This seems like a good argument, because even if a person were to know absolutely everything about the physical properties of objects that produce the "red" effect in our brains, and all of the physical reactions a body can have to those properties, there would still appear to be some crucial piece of information missing -- what is it like to see red?

But functionalists like Dennett blithely dismiss the tired notion that qualitative experiences of colors and pains cannot be explained. Our experience of red simply is whatever electrochemical processes are going on in our brain when we encounter a "red" object -- not something extra. These processes are a result of biological evolution: our sensory receptors (eyes) evolved to pick out certain properties in the environment that give us helpful information (the apple is "red," hence it is edible). We're defeatists, he says, if we hold on to archaic, unscientific ways of conceiving experience. Dennett has a point: most of us don't want to be reduced to a set of electrochemical happenings in the brain, even if it is the only provable explanation for our inner life.

Unlike Socrates, who debated the nature of reality with youths in a public square, a large contingent of leading philosophers now derives its ideas not from pure philosophical speculation but from discoveries and developments in other disciplines, such as evolutionary biology, neurobiology and cognitive science. In some sense, this debate over the existence of consciousness demonstrates the irrelevance of philosophy altogether. What is the use of sitting around musing about what colors are when scientists have shown that they are simply chemical processes in the brain? What is the point of conceptualizing ever-more-bizarre scenarios of zombies, brains in vats and alternative worlds when the answer has already been "proven" -- we're programmed by biology?

Indeed, the skeptical Walter Freeman thinks that Anglo-American analytic philosophers have "bankrupted themselves" with their word games. "By restricting their inquiries to logical systems and rule-based symbol manipulation," he laments, "[philosophers] have swept aside the ancient questions of value, intentionality, emotion and awareness as unscientific and unworthy of serious debate." Freeman has even harsher words, though, for artificial intelligence symbol-crunchers and functionalists like Dennett, whom he calls "an empty vessel" for dismissing the importance of concrete, real-world interactions in making intelligent, conscious and willful beings.

But if artificial intelligence gurus ever produce what they say they can -- a computer that thinks -- philosophy may really be in trouble. Even so, philosophers who don't buy artificial intelligence will quarrel over how we can know whether a computer is thinking. This means a return to the zombie hypothesis: Maybe the computer is just exhibiting the right kinds of behavior, but is not experiencing, or thinking, anything at all.

A more sophisticated way for philosophers to preempt the artificial intelligence strike is to stress -- as Freeman does -- that what the human mind does so differently from any computer is relate itself to a world outside. Computers don't do that. They don't have to deal with ordering the infinite possibilities of everyday lived experience; they order only what falls within their limited domain. Hubert Dreyfus, who tangled with Dennett on "The NewsHour with Jim Lehrer" after Deep Blue's defeat of Kasparov, insists that real intelligence comes only from embodied beings who live in the world and operate in concrete situations.

Dennett, steadfast in his support for artificial intelligence, thinks that we will be able to reproduce the processes of living in the everyday world in a computer -- though he admits this will be a dauntingly complex task. In fact, he thinks we are well on the way with an MIT experimental "humanoid" robot named Cog, which he gushes about with a kind of boastful paternal pride.

Though a thinking computer, if it is even possible, lies in the distant future, it is a future that has already been imagined. Kasparov's defeat at the "hands" of Deep Blue was surprisingly big news -- which seems a little weird, because I am constantly defeated by my computer chess program, and expect that I always will be. It's just faster and more powerful than me or any other human being. Why should this strike such a public chord?

Having already imagined conspiratorial computers "coming to life" and wreaking all kinds of havoc, we take Deep Blue as a sign of the end of our dominion. First Kasparov, then us all!

If computers ever do come to think, they will probably while away their time being puzzled by the same kinds of things we are puzzled by. Perhaps they will spawn their own breed of cyber-philosophers who will contemplate their chips instead of their navels. Are human beings just zombies, computers will ask, or are they conscious too? Why do I experience things like redness and pain? And what is consciousness, anyway?


By Paige Arthur

Paige Arthur is a freelance writer and editor and a Ph.D. candidate in history at the University of California-Berkeley.
