Sentience is a slippery concept. Since there's little agreement on the fundamentals and mechanics of consciousness, the question "Can computers think?" invariably descends into circular and fruitless debate. How can we decide if a computer can think until we decide what thinking is in the first place?
The late, great Alan Turing -- generally considered a giant of computing theory and artificial intelligence -- neatly sidestepped this theoretical gridlock by proposing his own functional definition: A computer that responds with enough subtlety and complexity to convince users that it's human is said to pass the Turing Test, and thus meets the standard for true machine intelligence.
Turing Tests are commonly staged as chat-style conversations, since natural language -- with all the subtleties of syntax, semantics and wordplay -- is widely intuited as an unmistakable hallmark of real intelligence. Artificial intelligence researchers have made many attempts to create language generators that pass the Turing Test. One example is the Eliza program -- a popular conversation module written in the 1960s by MIT's Joseph Weizenbaum to mimic a psychotherapist. A recent exchange between me and Eliza went like this:
Me: Hi, Eliza.
Eliza: How do you do? What brings you to see me?
Me: Not much. How are you today?
Eliza: Does it bother you that are me today?
Me: Uh, excuse me?
Eliza: Why do you say uh excuse you?
Me: You're not really trying very hard, are you?
Eliza: Earlier you said are me today?
Me: Are you trying to tell me I'm crazy or something?
Eliza: Maybe your plans have something to do with this.
Me: Is this visit covered by my HMO?
Eliza quickly betrays herself with the wooden and brittle conversational style of the machine. Clearly she was just parsing my words (in this case, badly) and reflexively shoveling them back, lightly repackaged as questions -- a cheap ruse on the part of the designer to make her appear interested or contemplative. When my statements exceeded a threshold of complexity, Eliza changed the subject -- bad couch-side manner, to be sure, and enough to make me consider therapeutic alternatives like Prozac or electroconvulsive therapy. At any rate, Eliza roundly flunks the Turing Test; she is transparently digital and less than scintillating company to boot.
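The whole ruse is simpler than it sounds. Here is a minimal sketch, in Python, of the trick described above -- swap the pronouns, repackage the statement as a question, and change the subject when nothing matches. (This is my own toy reconstruction, not Weizenbaum's actual program, which uses far more elaborate ranked keyword scripts; the pattern list and fallback lines here are invented for illustration.)

```python
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "you": "I",
    "your": "my", "am": "are", "are": "am",
}

# Canned subject-changers for when the input exceeds the
# program's (very low) complexity threshold.
FALLBACKS = [
    "Maybe your plans have something to do with this.",
    "Please go on.",
]

def reflect(fragment):
    """Swap pronouns so the user's phrase can be echoed back."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement, turn=0):
    """Repackage the user's words as a question, or punt."""
    text = re.sub(r"[^\w\s]", "", statement)  # strip punctuation
    m = re.match(r"i am (.*)", text, re.IGNORECASE)
    if m:
        return f"Why do you say you are {reflect(m.group(1))}?"
    m = re.match(r"you (.*)", text, re.IGNORECASE)
    if m:
        return f"Why do you think I {reflect(m.group(1))}?"
    # No pattern matched: change the subject.
    return FALLBACKS[turn % len(FALLBACKS)]

print(respond("I am having doubts about my humanity"))
# -> Why do you say you are having doubts about your humanity?
```

Thirty-odd lines, no comprehension required -- which is exactly why the illusion collapses the moment the conversation wanders off-script.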
On the other hand, a recent exchange in an AOL chat room was sufficient to convince me that my interlocutor was all too human:
Me: How are you today?
BiteMe100Times: What's it to you? Unless you're my mother or my shrink, you can fuck off.
Me: Just trying to be friendly.
BiteMe100Times: Yeah, but I want a commitment.
Me: Actually, I'm writing a magazine piece on the Turing Test. I'm trying to figure out if you're human. You could be a machine, you know.
BiteMe100Times: Oh, sure I'm human. Two plus two is four. Four times four is 16. Four to the 16th power is [core dumped]
Me: Very clever.
BiteMe100Times: What the hell do you expect? I'm running Windows NT. Now go away.
My chat room partner, unlike Eliza, strongly displayed many of the key features of carbon-based consciousness: sarcasm, irony, misdirected hostility, frustration with Microsoft. Definitely a real person. I took comfort in the proposition that my humanity was secure; differentiating a person from a machine was trivial, even in an Internet chat room.
This confidence was short-lived. Sometime afterward, I attended a cocktail reception thrown by a group of implausibly self-actualized, 20ish Silicon Valley Internet entrepreneurs. But they were real people, I was pretty sure. At least they looked human. Certainly they swilled martinis and slurred their speech and bounced off the walls in that distinctly Homo erectus kind of way.
But, as I was beginning to get a little bored by all the hobnobbing and elbow pressing, my mind drifted back to the Turing Test. I decided to play a little game with myself (I am, if nothing else, easily amused) by inverting it: What if I ignored my co-imbibers' hardware -- their bodies -- and abstracted our conversations into pure text, like Eliza? How would they stand up to the Turing Test? I began to project a stream of luminous, disembodied text onto the inside of my forehead. (The tequila helped a lot here, believe me.)
The results were more than a little unsettling. Excerpted from forehead:
Me: Hi there.
Martini25: What's your name?
Me: Tom. Great party. What are you drinking?
Me: I've never had one of those. Is it a vodka drink?
Martini25: Our company's moving out of a distributed, hierarchical B-to-B architecture and into a peer-to-peer, B-to-G back-end strategy that's going to ignite our share price and send it back into orbit.
Me: Um, can we back up here? I got lost at B-to-something ...
Martini25: Do you have a card?
Me: I left them in the car. In my briefcase.
Martini25: You left them in the car? You should check out our Web site. It's at www.BtoGeewhiz.com. What's your name?
To be fair, it was a really noisy party. But I was pretty spooked; Martini25 seemed to be a dot-com upgrade of Eliza: stock phrases, recycled language, trouble switching contexts, an eerie inability to connect with my side of the conversation -- a finite-state machine in jeans and a black turtleneck, clutching a cocktail and spouting URLs. Disoriented, I stumbled toward an attractive brunet, hoping for a less automated exchange.
Me: Hey there. I'm a little freaked out by this scene. Could you humor me for a minute and do something to reassure me that you're actually human?
CalSpritzer: Excuse me?
Me: Are you really human?
CalSpritzer: Why are you asking if I'm really human?
Me: Oh, don't do that! Yowch! Wait -- sorry. My fault. Start again. I'm Tom. What's your name?
CalSpritzer: I'm Penny. Web designer. I consult to content portals to increase their stickiness.
Me: Nice. I like to mooch free drinks from budding Internet moguls.
CalSpritzer: You should check out my site. It's at www.WebGlue.net. Do you have a card?
I decided to get out of there before I lost it and made a scene. My grip on my own humanity was in need of an increase in stickiness. I decided to consult a therapist.
It didn't help:
Me: I'm having doubts about my humanity.
Eliza: Why do you say you are having doubts about your humanity?
Me: It was something about a peer-to-peer B-to-G back-end strategy.
Eliza: What makes you believe that?
Me: Well, it's supposed to ignite shareholder value. That's good, right?
Eliza: Maybe your plans have something to do with this.
Me: Do you have a card?
Later, I reflected, maybe the problem is that the Turing Test just isn't so relevant anymore. Perhaps it's just a charming theoretical anachronism, an artifact of a distant time (Turing laid his theoretical groundwork in the 1930s and proposed the test itself in 1950, after all) in which computers themselves were mostly theoretical. In this new millennium of wireless communications and ubiquitous computing, maybe the big challenge isn't for a computer to simulate a human. Maybe the real challenge is for people to prove they aren't machines.