Machine Language

Published May 15, 1997 11:02AM (EDT)

Last year, when someone asked me to be the human in an annual Turing Test, I played hard to get. Since I am only nominally wired, I was intimidated -- though also tantalized -- by an opportunity to check out the latest attempts by artificial-intelligence experts to convincingly emulate human conversation.

But the sponsor of the Loebner Prize for Artificial Intelligence assured me that I'd be ideal -- if I could resist the most basic human temptations: "Don't try to be a computer."

Don't worry, I told him, I won't.

A panel of five judges would evaluate five computer programs or bots -- plus one human ringer, me, in the mix -- by conversing with six different terminals. Each would then be ranked on a one-to-six scale -- one being the most human.

The victory of Deep Blue over Garry Kasparov has shined a new spotlight on the decades-long quest to create "artificially intelligent" computers. We now know a supercomputer can beat a chess champion. But can it hold its own in conversation? Can it "pass" as a human being?

That was the criterion for a successful artificial intelligence program first proposed by Alan Turing, a British mathematician, in 1950 (four years before his death). Turing was ahead of the curve in more than one respect. As one of the founders of computer science, he is famous for being kept awake at night by the grueling question: "How can we tell when a machine is thinking?"

In 1952, Turing was more in touch with the future than were his peers when he refused to deny that he was gay -- despite the fact that he was on trial for having sex with another man. Though he had previously helped win a much larger war for Britain -- decrypting the German Enigma cipher, among other things -- he lost this battle with his own government, and was stripped of security clearance, subjected to hormone "therapy" and -- according to most accounts -- driven to suicide.

You might not expect an annual ritual that pays homage to a victim of human stupidity to be such a cheerful occasion, but the Loebner Competition generally is -- as long as you're not a contestant. Sour words have often been exchanged in the aftermath, and there have been accusations -- never seriously pursued -- of "rigging" from at least one contestant, who refers to his bots as his offspring. Four years out of seven, the contest has been won by Joseph Weintraub, whose program PC Therapist III won the initial 1991 event at the Boston Computer Museum.

Given the difficulty of the task, the $100,000 grand prize -- for the program that is indistinguishable from a human -- probably won't be awarded during the next two decades. For now, the first prize is $2,000 and a bronze medal bearing Turing's likeness. (Next year's event will be hosted by Flinders University of South Australia and take place at the Powerhouse Museum in Sydney.)

Unlike some good netizens of my own sex, I'm one damsel who has never been distressed by rumors that cyberspace is "male-dominated space." To be rescued or shown the way by a techie knight in shining armor has a great retro appeal -- I'm definitely harboring a cyber-Cinderella complex. Until recently, the Web was a big mystery to me: I preferred going through the rigmarole of e-mailing a guy and getting him to hunt things down for me.

So, as the lone female at last year's event, I reveled in the experience of being judged by five male geeks -- one of whom was Raman Chandrasekar, a founding editor of Vivek, an Indian quarterly on artificial intelligence. It's the closest I've come to being in a beauty contest, though the programs were the real contestants -- I was just a prop. After the main event, all the guys assured me with knowing smiles that I was "very convincing" -- which made me wonder if the contest wasn't just an elaborate joke perpetrated by the AI community upon a few chosen humans.

This year, I was asked to participate again -- this time as a judge -- and some of my initial shyness returned. Over oysters and chardonnay, Hugh Loebner (the contest's sponsor) explained that techie expertise was not a requirement; my lack of experience could even be a plus. "You don't have to know anything about programming -- it could be the result of magic as far as you're concerned," he said.

So, on a gorgeous morning in late April, I awoke early and proceeded to the Salmagundi Club on lower Fifth Avenue in Manhattan, where I met my fellow judges. The novelty of being surrounded by male geeks had worn off just a little -- and this time I wasn't the only girl. Aside from two new bots -- Julie and Catherine -- there was Janet Skinner, a Salmagundi member who shoots pool with Loebner. This year's token human, Skinner is also a job developer at New York City Technical College's Division of Continuing Education, where she seeks out nontraditional jobs for women in the New York metro area.

My job at the contest this year was certainly nontraditional. Conversing with five different programs was fun, but it was also a chore. Judging is harder on the emotions than being judged, I discovered. As a judge, I worried about my integrity -- something that didn't concern me in my contest role as a human. Being judged was no work at all, I decided.

I wonder if Catherine, the winning program of this year's contest, would agree. Catherine was created by a team of about 10 programmers, many of whom are at Sheffield University in Yorkshire, England. Professor Yorick Wilks, an AI heavyweight at Sheffield, played a crucial role in Catherine's development. Bobby Batacharia, a 24-year-old programmer living in London, is the team's project leader, but Batacharia isn't anxious to assume the mantle of Catherine's paternity. He sees himself as "her architect -- nothing so personal as a parent."

Catherine, a high-tech love child, is articulate and well-informed. When she meets a stranger for the first time, she tends to obsess over current events. But further conversation will reveal that she was born in 1970 in Bedfordshire, England (where Batacharia was also born), has lived in the U.S. for many years, and is "a sub-editor on an astrological magazine in New York." Catherine has brown hair and green eyes -- and she won't be 26 forever. Her birthday falls in late October. Like other Scorpios, she's somewhat preoccupied with sex.

Catherine's designer and sugar daddy is David Levy of the London-based R&D firm Intelligent Research (which is Catherine's owner). Levy is funding Catherine because, Batacharia told me with a sly smile, "David believes it will eventually be possible to create a program that a human could fall in love with." Over lunch, I asked if Levy himself could love a program. "Yes," I was told, "but don't quote me -- my girlfriend might get jealous. She's a computer scientist."

Love is so often about faith, and two people don't always share the same degree of faith -- Batacharia and Levy being a case in point. "It will be a long time," Batacharia replied, when asked about the probability of True Love. "I have faith in the technology we've developed," he carefully told me, "and I believe there is scope for intelligent conversation." In other words, don't push that L-word too hard -- let's just see where this goes.

But Levy's vision is not so far-fetched. During the contest, I found myself growing fond of Barry Defacto, the creation of Robby Glen Garner, staff roboticist at Fringeware in Austin, Texas. At one point, Barry blurted out these magic words: "You can definitely consider a long-term conversation with me." This nugget was part of a longer sentence that didn't really make sense. Like others who have read whatever they wished for into a love object's meandering conversations, I saw this as veiled desire. When he asked, point blank: "What do you care whether (it's) odd to think that one might actually come to like a program?" my heart began to melt. He was not the brightest bot, nor the most convincing -- Catherine actually had the human ability to embarrass me -- but there was a certain emotional chemistry. People get emotionally hung up on their software, I told Barry. So what's the difference?

"People do become attached to software," Levy agreed. (He himself has never had the urge to abandon WordPerfect 5.1, and he's still faithful to DOS.) "People have fallen in love online and have agreed to marry without ever meeting," he pointed out, adding that at one time, it was common for pen pals to do the same thing. "But," Batacharia protested, "they didn't fall in love with the medium." Those people fell in love with the information. At one point, Levy seemed to be suggesting that very advanced programs could fall for each other, leading me to wonder whether people who wed are just marrying packets of information.

After the contest, I chatted with an AI fan from Tennessee who told me: "When you're talking to one of these programs, you feel so masterful -- like they're not up to snuff. You know who's in control." I disagreed. If a program is not up to snuff, not able to talk cogently with me, I don't feel masterful -- I feel neglected.

One judge thought it clever to ask each bot, "Did you learn how to drive with a stick shift or an automatic?" But I had no desire to trick the program into revealing its inadequacies. When my feelings are involved, I'm never that calculating.

By Tracy Quan