If Hugh Loebner’s contest is just hokum, and the Turing test has outlived its usefulness, why should we care about it or its various squabbling participants?
A vocal camp in the brainy “philosophy of mind” profession believes that the Turing test should be relegated to the history books, but I’m going to assert axiomatically that the test, as it is generally understood by ordinary humans like you and me, is interesting. The question of whether computers can successfully pose as human beings has obsessed writers, filmmakers and computer scientists for decades. Therefore, without getting sucked into a philosophical vortex about the nature of minds, machines, intelligence and so forth, all we need to find out — if we want to know if the Loebner competition matters — is whether there exists a more respectable variant of the Turing test. As far as I can determine, there doesn’t. The Turing test is, as it were, state of the art.
But instead of buckling down to meet the challenge that Loebner poses, the artificial intelligence community has made a consistent effort to change the rules — to do away, even, with the very name of their own discipline.
Neil Bishop, the organizer of the 2002 Loebner competition, summed it up as follows:
“In the professional and academic circles the term Artificial Intelligence is passé. It is considered to be technically incorrect relative to the present day technology and the term has also picked up a strong Sci-Fi connotation. The new and improved term is Intelligent Systems. Under this general term there are two distinct categories: Decision Sciences (DS) and the human mimicry side called Mimetics Sciences (MS).”
Decision sciences, by the simplest possible definition, refers to computerized assistance in resource allocation. An example provided by a press release from MIT announcing the creation of a decision sciences program was “complex computer-based ‘passenger yield management’ systems and models that the airlines use to adjust pricing of each flight’s seats in order to maximize revenue and profitability to the airline.”
That’s a far cry from the bold claims made by A.I. visionaries in decades past. But focusing on such systems has a signal advantage for scientists who have been failing miserably at the Turing test. It gets them off the hook. As James H. Moor, of Dartmouth College’s department of philosophy and the organizer of the 2000 competition, wrote:
“The Turing test is not very useful for many A.I. scientists today because they work on projects that have nothing to do with human linguistic performance.”
But Moor did concede that Alan Turing’s challenge is still worth chasing: “Nevertheless, the Turing test will remain a philosophically interesting test and a long range challenge for A.I. If a computer could routinely converse with us as well as Deep Blue could play chess and we had no reason to believe some kind of trickery was involved, how could we deny it had at least some intelligence?”
Even as recently as November 2002, the influential IBM Systems Journal featured a technical forum on “Machine Intelligence and the Turing Test”; but its only mention of Loebner was in a footnote:
“A formal [Turing test] yearly contest, sponsored by Hugh Loebner and The Cambridge Center for Behavioral Studies, accords a $2000 prize and medal to the most human-like computer contestant. Among the most well-known critics of the contest is Marvin Minsky … Minsky has wittily sponsored a ‘Minsky Loebner Prize Revocation Prize.’”
I don’t know about you, but I find this sycophancy embarrassing. As to the antipathy to the Loebner competition from the A.I. establishment, Neil Bishop confirmed my impressions:
“The hard-core DS types like Professor Minsky firmly believe that the ‘Holy Grail’ (cognitive understanding and response) can only be realized through the DS approach. As a result they have very little respect for the mimetics (human mimicry) side of the equation. Also just by its nature the MS side embraces the general public’s view of the old A.I. term. After all if you can talk to an artificial person and it responds in a human-like manner, who cares if it is actually ‘thinking’ or just doing a damn good job of fooling you? And if you look at Turing’s original concept that is really all that is needed to win. As a result, the DS camp seems to think that mimetics are undermining their image. Particularly since there have been many bold projects and claims in the DS camp which have failed. This is where the rift spawns. Then you take the apparently fragile personalities of key players in both camps, well, to put it bluntly you end up with a childish display of emotions at the least, and at times a real ‘bitch fight’ will get started.”
In other words, if you read between the lines what you come up with is that one reason “serious” A.I. scientists don’t try to mimic human speech anymore is that they discovered they can’t do it. Of course, they promised 30 years ago that they would be able to do so “real soon now,” but it has turned out to be harder than expected, so now it’s dismissed as mere mimicry.
Although Google can find prominent mention of Daniel Dennett’s three-year tenure as the chair of the original Loebner competition committee on earlier versions of his personal Web site, there is no mention of it on the site now. Perhaps, I thought, this whole business is so old that he’s forgotten about it. Not so.
“I have a very clear memory of how I came to resign … Danny Bobrow and I put together the idea of a revision of the rules (as described in “Brainchildren,” p. 29). But when we put the idea to Loebner, he would have none of it … If the Chairman of the Prize Committee makes a carefully thought-out proposal about how to salvage the competition, and it is summarily rejected, there is really nothing left to do but resign, since my opinion apparently was not considered worth serious discussion. Which is what I did.”
Dennett’s commentary in his book “Brainchildren” is telling. He explains that “serious contestants” from “the world’s best A.I. labs” aren’t interested because “passing the Turing Test is not a sensible research and development project for serious A.I. It requires too much Disney and not enough science.”
Does that sound as snotty to you as it does to me? Well, it gets better:
“We might have corrected that flaw,” wrote Dennett, “by introducing into the Loebner Competition something analogous to ‘school figures’ in ice-skating competition: theoretically interesting (but not crowd-pleasing) challenges such as parsing pronouns, or dealing with enthymemes (arguments with unstated premises). Only those programs that performed well in the school figures — the serious competition — would be permitted in the final show-off round, where they could dazzle and amuse the onlookers with some cute Disney touches. Some such change in the rules would have wiped out all but the most serious and dedicated of the home hobbyists, and made the Loebner Competition worth winning (and not too embarrassing to lose).”
Let’s forget Turing’s actual test, he says; let’s rather find a way to eliminate competitors that don’t come from the best A.I. labs! Having done that, we can toss off a few cheap tricks to amuse the people who are not as clever as we are.
Speaking for myself, I think Sarah Hughes is a god. Her gold-medal performance in the 2002 women’s figure skating competition was one of the most breathtaking long programs I’ve ever seen, and I could not care a fig about whether she could pass her “school figures.”
But when I pressed Dennett on the mainstream A.I. community’s rejection of Loebner, he replied:
“Why should ‘academic A.I.’ take Loebner seriously, when he persists in running a competition that still doesn’t test the linguistic abilities that a serious language comprehension system must have? Don’t expect aeronautical engineers to be interested in high-jump competitions.”
“I may be missing something, but it sure seems to me that [Loebner's] main mistake has been in not belonging to the right club,” I answered. “He’s brash, he’s zany, he hangs out with hookers, he makes disco floors for a living — he doesn’t teach at MIT or write books on the nature of consciousness. As far as I can tell, that is the main reason that the Loebner Prize is not embraced by the ACM Turing Award crowd. That and the fact that A.I. had a two-decade history of overpromising and underdelivering, which his prize showed up in neon.”
“Well, I’ve given you the reason,” replied Dennett. “Think about it: If you and your lab/team had devoted years to developing a truly competent language-comprehension system, but it could be beaten by somebody’s cheezo hobby system because the rules didn’t permit putting a real strain on the competitors, you wouldn’t enter that competition. You wouldn’t take that competition seriously. You don’t enter your Ferrari in a ‘race’ to the bottom of the mountain that can be won by the first car that drives over the cliff and lands upside down on the finish line…”
The last communication I had from Dennett simply said, “Loebner couldn’t even consider postponing the contest for a year or so even if that was the only way to make it respectable. Too bad for him and his reputation. We tried.”
It didn’t occur to me until later to point out that a computer program is not an automobile and that the only real risk of entering and losing the competition is embarrassment.
The animosity expressed by luminaries like Dennett and Minsky only makes things harder for the Cambridge Center for Behavioral Studies. How is it supposed to line up prestigious sponsors when its patron insists on getting in mud fights with widely respected scientists? Which, by the way, speaking of the Cambridge Center, brings up more questions. Who are those guys? And how the heck did they get caught up in all this?
I asked Loebner, and this was his answer: “The purpose of the CCBS is to apply the techniques of behaviorism and behavior modification (operant conditioning — Skinner box etc.) to human problems, thereby ameliorating them. I came to this understanding after Robert [Epstein] asked me to let the CCBS run the contest. At the time he asked, I really couldn’t understand the reason for the CCBS’s existence and thought that the Loebner Prize would provide a raison d’être.”
In another e-mail to me, Loebner said that he had personally kept the center afloat during some rough patches. So, according to Loebner, the Cambridge Center and the Loebner contest were simply each other’s fig leaf. I decided to get the center’s side of the story.
It took a bit of persistence on my part, but eventually Dwight Harshbarger, the center’s executive director, agreed to take my call. He is a soft-spoken, courteous man with a slow Southern way of speaking, and he took his time formulating his answers before responding to my questions. In other words, he is the very antithesis of Hugh Loebner.
It was clear that the Loebner Prize was not a comfortable topic for him. He acknowledged that the 2002 contest had not gone well, and that the center was actively looking for a host for the 2003 competition. Discussions were at a delicate point with two potential sponsors, he told me. He did not want to say anything to a reporter that might disrupt them. After all, the center has a long history of having sponsors back away from hosting the competition. Harshbarger assured me that he, and the center, would like nothing better than to be able to announce a date and venue for the 2003 competition, if for no other reason than that the search was a distraction from the center’s main mission. There was no need for him to add that it would also get Hugh Loebner off his back.
So what exactly was the center’s mission? Well, it was to promote behavioral studies, he said.
Here I must admit to a certain amount of head-scratching. I had already been to the center’s Web site and still had no real idea what the place was all about. For example, here is what the Web site has to say on the subject of “verbal behavior.”
“A great deal of our interactions with others involves verbal behavior, and many people are interested in what happens when you talk to someone. Your behavior when you are speaking is called verbal behavior, and the behavior of the person or persons listening to you (if they respond in some way to what you’ve said) is called verbally governed behavior.”
Hmmm. Many people are interested in what happens when you talk to someone.
Dr. Harshbarger explained to me that Loebner had indeed given $125,000 to the center, and that under the terms of Loebner’s gift the center must run the contest and keep the prize money until it’s time to award it. The center also gets to keep, permanently, the interest that accrues on the $125,000 as the years go by and bots fail to pass the full Turing test.
I asked Harshbarger why the Cambridge Center didn’t just run the competition itself, as Loebner suggested. “We’re just not set up for that,” he said. “It requires a fair amount of equipment and expertise to do it right.” I replied, “Hugh Loebner says it’s not complicated at all, that he could run it by himself in his apartment.”
Harshbarger laughed a sad laugh, and I could just imagine him holding his head in his hands.
Managing the competition seemed to me an awful lot of work and aggravation merely to earn the annual “gift” of the interest, I suggested. Harshbarger waited a long time before answering. “I don’t have any comment about that,” he said finally. So I asked him why the center didn’t just return the gift to Loebner. He did answer that question for me, but not on the record.
I have since confirmed that the center is indeed actively seeking to give Loebner’s gift back to him. This is turning out to be more difficult than one might imagine.
In a letter dated Dec. 24, 2001, Peter Farrow, attorney for the Cambridge Center, wrote to Brent Britton (the attorney representing Loebner at the time), expressing exasperation. The center was trying to honor the terms of the gift, he wrote, but Loebner himself was making its job impossible. Time after time the center had had a host and sponsors lined up, only to have them withdraw after they discovered Loebner’s other interests.
“The Science Museum of London … exercised their option to end their contract with the Cambridge Center. In a letter to the Cambridge Center, their spokesperson said that the Loebner Prize didn’t fit with the Museum’s long range plans. Privately and off the record, a representative of the Museum told Dr. Harshbarger that the Museum was concerned about the sexually oriented material on Dr. Loebner’s Web site.”
The letter went on to chronicle similar experiences at the 2002 contest and to cite the case of another potential host for the 2003 contest (Duke University) that had removed itself from consideration after finding out more about Loebner.
“After lining up sponsors and financial contributions that would fund a well-managed Competition, the sponsors withdrew their support … due to concerns about the material on Dr. Loebner’s site.”
Farrow wrote with considerable delicacy about “the tarnishing of the intellectual image of the Loebner Prize by Dr. Loebner’s other activities that appears to be occurring in the minds of hosts (a problem that probably is accentuated by Dr. Loebner’s personal involvement in administration of the Competition).” In light of these and other considerations, he said, “The Cambridge Center is prepared to return the gift to Dr. Loebner, or to transfer it to a suitable not-for-profit organization he selects.”
However, in the meanwhile, Farrow said, “the Center intends to … administer the Competition for the intellectual and scientific purposes for which it was given, and not simply as part of Dr. Loebner’s personal agenda.”
In a subsequent letter to Loebner dated Jan. 4, 2003, Farrow summarized the tension between Loebner and the center and repeated the offer to give the whole thing back (by this time Britton was out of the picture and Loebner was acting as his own attorney):
“The dilemma requiring resolution is how to enable the Center to manage the Competition free of the obstructions caused by you (some perhaps inadvertently). One solution … is to return the funds and medals to your control either by returning the gift or transferring it as you direct.
“Another solution might be a more cooperative role by yourself which supports, rather than conflicts with, the Center’s role to a net effect of enhancing rather than obstructing the Center’s management. The difficulty I see in this is, the Center has no right to affect how you choose to behave. For example, if the legalization of prostitution is important to you, the Center respects that. However, the deleterious effect of your activities on the Competition remains.
“The current situation is unlikely to be sustainable over the long term…”
Loebner’s response to Farrow’s overtures (summarized in a letter that Loebner shared with me) has been as wacky as anything I put in my nonsensical novel.
The core of his argument is reasonable and, dare I say it, noble: Loebner explicitly makes the connection between Alan Turing on the one hand, and exploited and oppressed workers in the sex trade on the other hand, and he resolutely denies that there is any inappropriate material on his Web site:
“I state for the record: 1. There is no ‘sexual material’ on my Web site. I do have an advocacy position regarding the decriminalization/legalization of prostitution. This is a human rights matter, not a sexual matter, although it does, of course, relate to the human rights of consenting adults to engage in mutually agreeable sexual behavior. I espouse this view for two reasons: Turing’s suicide because of the intolerance of his homosexuality heightened my sensitivity to sexual oppression of minorities, and the persecution of sex workers and their clients is persecution of me and mine.”
Loebner then disputes the idea that his opinions have anything to do with the Cambridge Center’s problems.
“2. This advocacy has been on my Web site since, I believe, 1995 or 1996. It was in effect well before the London Museum agreed to host the contest. I believe the main reason that the Museum opted out of the prize is that the main proponent, who initiated contact with me, moved to Australia. My advocacy did not dissuade Flinders University or Dartmouth University from hosting the contest.”
And then he veers into more volatile territory. According to Loebner, the Cambridge Center, which has run this contest, presumably at a loss of money and certainly with no small amount of headache since 1991, owes him $200,000 in damages!
I was to find out that the most thoughtful critiques of the Loebner contest come not from the IBM/ACM camp, or feuding lawyers, but from Loebner participants, especially its winners. They told me that the main problem with the Loebner contest is Hugh Loebner. Ever since the Shieber-Dennett-Minsky defection, each annual contest has been run by a different institution, with a different competition committee that must start from scratch. There is little or no organizational memory to the contest, and much micromanagement by Loebner himself.
Loebner denies that he meddles; he told me repeatedly that his only concern is that the contest be held and that he is happy to empower each contest committee with full authority. However, the stipulation placed on his gift says that the contest rules must be acceptable to him, and he certainly has made his opinions known about them. I was not able to find a single person to agree with him about his role in the contests.
You can’t do too much research into recent Loebner competitions without coming upon the enigma that is Dr. Richard Wallace. He’s known as the founder of the ALICE foundation and the creator of AIML, a free, open-source language for scripting chatterbots. He’s also known as a seriously odd person. “He’s one stoned hippie,” one person told me. “His ideas are bizarre, even in a universe of bizarre ideas,” somebody else said. “That ALICE guy? He’s a nut. I mean it. A nut.”
Wallace, according to his official biography, is severely mentally ill:
“Richard Wallace is Information Technology Committee Chairman for 350 Divisadero St., a medical cannabis patient services organization. Wallace was diagnosed with bipolar affective disorder in 1992, and became functionally disabled in 1999. He cares for sick and dying patients every day, and provides critically needed technical assistance to the Center.”
So I’ll admit that I put off getting in touch with him and even considered not contacting him at all. I figured that I had already interviewed enough eccentrics for five good articles and needn’t subject myself to any more. But then I read a long and compelling interview with him on the geek news site Slashdot.
In keeping with Wallace’s reputation for eccentricity, the article — which is mostly about A.I. and the Turing test — contains a long and dense discussion of a recent court case that resulted in a restraining order being issued against him at the behest of a former close friend. I found that odd, but his discussion of his ALICE philosophy was cogent and interesting, and it held implications for what the Loebner competition’s continued existence could signify, behind all the ongoing foofaraw.
Wallace’s theory of A.I. is no theory at all. It’s not that he doesn’t believe in artificial intelligence, per se; rather, he doesn’t much believe in intelligence, period. In a way that oddly befits a contest sponsored by a bunch of Skinnerians, Wallace’s ALICE program is based strictly on a stimulus-response model. You type something in; if the program recognizes what you typed, it picks a clever, appropriate, “canned” answer.
There is no representation of knowledge, no common-sense reasoning, no inference engine to mimic human thought. Just a very long list of canned answers, from which it picks the best option. Basically, it’s Eliza on steroids.
Conversations with ALICE are “stateless”; that is, the program doesn’t remember what you say from one conversational exchange to the next. Basically it’s not listening to a word you say, it’s not learning a thing about you, and it has no idea what any of its own utterances mean. It’s merely a machine designed to formulate answers that will keep you talking. And this strategy works, Wallace says, because that’s what people are: mindless robots who don’t listen to each other but merely regurgitate canned answers.
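The stimulus-response scheme Wallace describes can be sketched in a few lines of Python. Everything below — the patterns, the canned replies — is invented for illustration; the real ALICE stores its rules as AIML categories (an XML dialect) and has tens of thousands of them:

```python
import random
import re

# A toy, stateless pattern-response chatterbot in the Eliza/ALICE spirit.
# Each rule pairs a regular-expression "stimulus" with canned replies;
# nothing is remembered between calls, and nothing is "understood."
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I),
     ["Nice to meet you, {0}!", "Hello, {0}. Tell me more about yourself."]),
    (re.compile(r"\bi (?:feel|am) (\w+)", re.I),
     ["Why do you feel {0}?", "How long have you been {0}?"]),
    (re.compile(r"\?$"),
     ["Good question. What do you think?", "Why do you ask?"]),
]
# Fallback replies designed only to keep the human talking.
DEFAULT = ["How interesting. Go on.", "Tell me more."]

def respond(utterance: str) -> str:
    """Return a canned reply for the first matching pattern; keep no state."""
    for pattern, replies in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT)
```

Type “My name is Tracy” and the bot echoes “Tracy” back in a flattering reply; type something it has no rule for and it falls back on “Tell me more.” That it keeps you talking at all, despite matching and regurgitating rather than listening, is precisely Wallace’s point.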
I reached Wallace while he was staying with friends in the Netherlands. There was loud techno music playing in the background as we spoke, but he himself was very soft-spoken, polite, funny, and friendly — even further from Hugh Loebner than Dwight Harshbarger of the Cambridge Center.
I asked him where he got the inspiration for ALICE. He said that he had been influenced by the “minimalist” A.I. ideas associated with Dr. Rodney Brooks of MIT’s A.I. lab.
At first, he said, he had tried to follow some of the more grandiose theories of traditional A.I., but he found them sterile. “You read a book with a title like ‘Consciousness Explained,’” he said, “and you expect to find some kind of instruction manual, something that you can use to build a consciousness. But of course it’s nothing of the kind.” (Daniel Dennett wrote “Consciousness Explained.”)
Well, I asked him, what was his explanation of consciousness? He said he did not have a theory, other than that maybe there was no such thing as consciousness in the first place. Maybe it was just a word, a social construction. But, I objected, I certainly perceive myself as conscious in talking with you. Don’t you feel conscious talking with me? Yes, he said, but maybe that was just the robot’s way of handling unfamiliar data.
We talked a little more about what it means to be human; he was very modest on the subject. Finally I asked him about the Loebner Prize, and in particular, about Loebner’s insistence that the competition be held every year, in the face of arguments from people like Shieber and Dennett that it not be an annual thing. “Well,” Dr. Wallace said, “the annual Loebner Prize certainly motivated me.”
And then we said goodbye.
So what does it mean to be human, anyway? What does it mean to have a mind?
Minsky has written books on what it means to have a mind. So has Dennett. Likewise Hofstadter, Searle, Newell, even Turing. But Wallace doesn’t care about such things, and neither, for that matter, does Loebner.
So I asked Tracy Quan about it. She’s a writer and former sex worker who has also been a Loebner competition “confederate” (that is, a human respondent to judges’ questions), and subsequently a Loebner competition judge.
Tracy and I chatted for quite a while about bots. Bots were amusing, she said. They were stupid, and yet she liked chatting with them because they were good for her vanity. They seemed so interested in everything she said, and were always willing to talk to her.
The overall sense I got from her was that she thought chatterbots were about as interesting as goldfish. Which are, you know, pretty interesting if you’re in the right mood for watching goldfish, but really not the kind of thing about which one would write whole books of philosophy. And then she said, “I’m a relationship person. I don’t care how the chatterbots work, I just care about my relationship with them. There was this one bot, I think his name was Fred. He was always so complimentary! He was like a flattering boyfriend. We had a very nice relationship.”
And then I asked her if she found the idea of artificial intelligence philosophically troubling, in the sense that someday one of these A.I.’s might become more intelligent, wise, funny … whatever, somehow more human, than any of us.
“What?” she said. “No, you’re joking.” I said, no, I wasn’t. There was a pause. And then she laughed and laughed and laughed.
She was right, of course. Spend a few minutes chatting with even the best of the bots, and you will cease to be threatened by their imminent eclipsing of humanity. Their performance is, in Loebner’s own word, gruesome. So I felt pretty silly about all the deep anguish to which I had subjected myself on that score.
And yet, a few weeks later I happened to read the first page of Tracy’s novel, “Diary of a Manhattan Call Girl,” in which the protagonist, “Nancy,” confesses to her diary an embarrassing incident in which she had been found out faking arousal in order to stimulate her client. She was reciting a canned speech used countless times before, but evidently her client, “Howard,” knew and didn’t mind.
And that was the point. Nancy wasn’t so much having a real conversation with Howard as she was engaging in stateless verbal behavior with him, just as an ALICEbot might. “Many people are interested in what happens when you talk to someone. Your behavior when you are speaking is called verbal behavior, and the behavior of the person or persons listening to you (if they respond in some way to what you’ve said) is called verbally governed behavior.” In Howard’s case the verbally governed behavior was orgasm.
When you look at it this way you can see that although Wallace’s theories of our non-consciousness may be hard to credit, there are bound to be enormous economic benefits to his approach to the Turing test — just as soon as Nancy’s repertoire of canned sexual responses has been typed into the ALICE brain, which may well have happened by now.
That’s why porn is one of the beckoning frontiers of stimulus-response-style A.I., along with video gaming. Even without much A.I. technology, gaming is a bigger business than Hollywood. Imagine what could come next: A.I.’s that act more or less like goldfish-humans in video games or on porn sites will engage lots of people’s interest and earn scads and scads of money.
Even before they pass the Turing test, in other words, chatterbots will become economically significant by evoking the desired “verbally governed behavior.” And it seems likely to me that some program based on the plodding, tortoise-like strategy of the bots will pass the Turing test before any sophisticated hare of a self-aware program based on “a truly competent language-comprehension system” from “the world’s best A.I. labs.”
If Wallace is right, the first “intelligent” machine according to Turing’s criterion will indeed be as dumb as a bag of hammers. It will win the prize without ever learning to parse pronouns or deal creatively with enthymemes.
Hugh Loebner likes to compare himself to King Lear, “more sinned against than sinning.” All he wanted to do was to give away a fortune in order to hasten the day when human toil would be abolished and people could devote their lives to pleasure, and look where it’s gotten him. It’s taken years off his life, made him the object of ridicule, and cost him a fortune. And now, on top of everything else, he’s probably going to have to take the Cambridge Center to court. “Litigation is likely,” he told me.
But why sue, I asked him, when the Cambridge Center clearly wants nothing more than to give you back your money and your prize and get you out of its hair?
“Generally, the Center has been making noises about being willing to return my money,” Loebner replied. “I have not yet responded to the last letter from Farrow (I am composing a reply which will include much of what follows), but I will tell them that I will not simply accept my $125k back. If they want to return the money, I want $375K — $125K for the prize money, $250K to cover the 13+ years of expenses, time and effort, lost interest, as well as the costs of establishing an alternative foundation to oversee the prize.”
To me this seems quixotic. As the Cambridge Center’s attorney Farrow put it, “the legal context of the Loebner Prize is one of a completed gift, not an ongoing contract.” I cannot imagine Loebner prevailing in court. Certainly if I were on the jury I would say, “Give him his money back. The end.”
Indeed it is hard for me to think of Hugh Loebner as King Lear, but it is easy to think of him as Don Quixote. The real Don Quixote, of course, the man in the novel, not in the sentimentalized TV renditions, was not exactly harmless. Don Quixote’s disputatiousness was not quaint; he sometimes beat people — usually friends or innocent bystanders — to within an inch of their lives. Likewise, Loebner’s most abused victims are usually the ones most sympathetic to his contest.
Here’s a typical story, this one from Robby Garner, a two-time winner of the bronze Loebner Prize and a member of the 2003 competition committee.
“The company which hosted the 2002 contest is called the Institute of Mimetic Sciences. We initially wanted to carry out the competition live on the Internet, and CCBS was in agreement with us that it would make the most sense since the overwhelming majority of chatterbots now are Web-based software. I was given the task of asking Hugh about it. Hugh went ballistic when I did, and kept repeating his objections to me on the phone for about 15 awkward minutes. After that call, IMS and CCBS came to a compromise that we would still admit Web-based software, but they would have to run ‘on site.’”
Don Quixote had a hard time dealing with “consensus reality,” and so does Hugh Loebner. Certainly his advocacy of the rights of sex workers and their clients has made him unacceptable to a certain class of sponsors. But I’m not talking about that. That particular political stance, it seems to me, is not standing between Loebner and the kind of success that he wishes for his prize.
Rather it’s his willful ignorance of the very technology he’s trying to promote — and the way he insists on micromanaging a contest that clearly would be better off if he were nowhere near it — that threaten the continuation of the very thing to which he’s devoted so much of his life’s work.
I myself, in a prior life, managed a usability test when Sun Microsystems switched its underlying Unix technology. Organizing that test was comparable in scope to what should have happened, and didn’t happen, in Atlanta last year. It took me four months working full time — and I had the resources of a world-leading technology company upon which to rely. I know a thing or two about the logistics of these things. When Loebner told me that he could manage the competition himself, from his apartment, part time, I knew that I was talking to a man who was “going through life both very smart and very ignorant.”
Don Quixote, of course, is beloved because his adventures are funny, and there is the funny side of Loebner’s bluster, too: his way of provoking pie fights wherever he goes.
“I was caught in a bitch fight between Loebner and Minsky,” recalled Neil Bishop. “We wanted to recognize Minsky for his work in the field of decision sciences. We knew of the past baggage between the two, so I contacted Minsky to request permission to do so. I think he was flattered in some weird way by this request and ultimately gave us permission — but not before blasting me for working with Loebner, and wanting me to pass on to Loebner that Minsky would be contacting his lawyer to begin a libel and defamation action if his name was not removed from Loebner.net immediately.”
I think that’s too sweet for words. Not only that, but Bishop himself proposes to join the pie fight:
“Anyway, here is a tidbit for you. We are presently working to put together our own Turing event that will embrace the integrity of the first event that Professor Minsky competed in during the ’70s. The prize? The Minsky Award for Intelligent Systems. The winner will receive a grant to support their research. Many details have already been worked out to bring this about. If you want to stir things up, you can put this in your article.”
When you consider that Bishop organized the most recent Loebner Prize, you can get some kind of idea just how badly the 2002 contest went.
Richard Wallace seemed almost glad that he didn’t win last year’s contest. He was among several people who told me about Loebner’s forceful statement — it went on for a while, evidently — that Ariel Sharon, and not Osama bin Laden, was behind the attacks of Sept. 11. In the memories of many of last year’s Loebner contest participants, the bin Laden incident stands out more than anything having to do with Alan Turing.
When you add that to all the other silliness, what you come up with is an event far from dignified. I wasn’t there, but I get the impression of a swirling chaos with enough vanity and pointless conversation to make artificial chatterbots totally redundant. The more you look at the actual event, the more apt seems Minsky’s phrase “obnoxious and unproductive annual publicity campaign.”
“I’m not used to being perceived as the most sober participant,” Wallace told me, sounding apologetic.
And yet for all this meshugas, I find a nobility in Hugh Loebner. I respect his standing up for prostitutes, surely a devalued class of persons, and I applaud his making explicit the parallels between their persecution and Turing’s. I admire the way he has welcomed all comers regardless of pedigree, the way he has stuck to the common sense of the Turing test in repudiation of those who would make it an exercise in “school figures”; I salute the meritocracy he has championed of hackers, free thinkers, eccentrics, and cheezo hobbyists. I think he was right to stick to his guns and insist that the competition be held, and a prize awarded, every year.
And I like his joie de vivre and his ability to laugh at himself (a trait of which a little goes a long way, I might cautiously mention to Drs. Minsky and Dennett). For all his talent for driving one to distraction, Hugh Loebner, self-aggrandizing fool though he may be, set out on this enterprise, as did Don Quixote, to help us find something better within us.
In the conclusion to his response to Shieber, Loebner wrote,
“There is a nobility in this endeavor. If we humans can succeed in developing an artificial intellect it will be a measure of the scope of our intellect … I suggest Loebner’s Corollary to Asimov’s Laws of Robotics: ‘Humans are gods.’”
But I think the last word must go to Dr. Wallace, who improbably enough is perhaps the closest thing in this tale to a Sancho Panza.
“And remember,” Wallace wrote in his Slashdot interview, “no one has proved that our intelligence is a successful adaptation, over the long term. It remains to be seen if the human brain is powerful enough to solve the problems it has created.”