All Hugh Loebner wanted to do was become world famous, eliminate all human toil, and get laid a lot. And he was willing to put up lots of good money to do so. He’s a generous, fun-loving soul who likes to laugh, especially at himself. So why does everybody dislike him so much? Why does everybody give him such a hard time?
Actually, not everybody does dislike him. He is beloved among sex workers, of whose rights he is a tireless advocate. Loebner also has friends, or at least people willing to hang out with him for short intervals, among the eccentric group of self-tutored hackers and robot builders who participate in the annual competition for the Loebner Prize in artificial intelligence.
Since 1989 Loebner has spent, by his account, more than $200,000 and a thousand hours of unpaid time to hasten the arrival of intelligent machines. He has set aside a gold medal and $100,000 in cash for the creator of the first machine that can pass for human. In the meantime he gives out annual prizes for programs that come closest to a long-sought holy grail in the artificial intelligence community: passing the Turing test.
But Hugh Gene Loebner, a fast-talking hardware manufacturer who has a distracted air, a Ph.D. in sociology, and an intense devotion to what he calls WWS (wine, women and song), is assiduously avoided by virtually everybody who has helped him organize his contests over the past dozen years or so. He is considered pushy and unpleasant by some of his biggest fans. And he is anathema to the self-proclaimed leading lights of “real” A.I., who loathe Hugh Loebner with a passion that borders, ironically, on the irrational.
For example, MIT professor Marvin Minsky — known by his disciples as the father of artificial intelligence — calls Loebner’s prize “obnoxious and stupid” and has offered a cash award of his own to anybody who can persuade Loebner to abolish his prize and go back to minding his own business. The mere mention of Loebner’s name is sometimes enough to get the father of artificial intelligence talking about lawsuits.
It’s easy to understand why contest organizers don’t have much good to say about Dr. Loebner — in their experience, he’s a control freak and congenital, chronic pain in the ass. People say that the more you go out of your way to do Hugh Loebner a favor, the more he treats you like hired help. Even participants in past Loebner contests say similar things. But what can account for this passion of the academic A.I. community, for whose benefit Loebner took out a mortgage on his house to endow the $100,000 grand prize? All he wants to do is give them his money for work they were going to do anyway!
To win the Loebner competition, software programs must mimic human conversation. Such programs are known as “chatting robots” or, more often, “chatterbots” or simply “bots.” But today’s academic A.I. researchers consider the chatterbot approach simpleminded. The Loebner competition, they argue, isn’t a real measure of progress in artificial intelligence but merely a “bot beauty contest.” To mainstream researchers, Loebner is a self-aggrandizing fool and his contest is hokum: at best irrelevant and at worst a public disservice that encourages bad science.
Loebner contests are often farcical and Hugh Loebner does act foolishly. But the closer one looks at the history of the Loebner Prize, the more it appears that Loebner’s real offense was showing up the biggest stars in “real” artificial intelligence as a bunch of phonies. Thirty years ago, Minsky and other A.I. researchers were declaring that the problem of artificial intelligence would be solved in less than a decade. But they were wrong, and every year the failure of computer programs to get anywhere close to winning the Loebner Prize underlines just how spectacularly off the mark they were.
Alan Turing was the British mathematician, cryptographer and prototypical computer scientist who, some say, did as much as Winston Churchill to save Western civilization from the Nazis. He was also perhaps the most influential thinker of the 20th century — in the sense that his ideas are the foundation upon which is built computer science and thus our entire digital world. He started the modern discussion about minds, machines and artificial intelligence. Turing was gay and did not hide it; for this he was shunned by his peers and forced by the government to undergo hormone “therapy.” He killed himself in 1954, and it is widely believed that he did so to escape persecution over his sexual preference.
The Turing test is the canonical benchmark by which we humans will know that computers have caught up with us in the smarts department. In 1950 — barely two years after the construction of the first machine that could reasonably be called a computer — Alan Turing proposed a simple test to determine whether machines could think. If you were conversing with an entity and you could not tell whether that entity was human or merely human-made, then whatever you were conversing with was at least as intelligent as you were. Turing predicted that computers would pass this test by the year 2000.
Long known to historians of computing, the Turing test emerged from obscurity and became part of popular culture in 1966, when Joseph Weizenbaum’s simple 200-line Eliza program, which used a few simple tricks to generate bland responses to human-posed questions, fooled people into thinking they were conversing with an intelligent being.
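Weizenbaum’s “few simple tricks” were essentially keyword patterns plus pronoun reflection: spot a phrase like “I am X,” swap the pronouns in X, and hand it back as a canned question. Eliza itself was written in MAD-SLIP, and the patterns below are illustrative inventions rather than Weizenbaum’s actual script, but a minimal sketch of the technique looks like this:

```python
import re

# Pronoun reflections, the heart of the Eliza trick: echo the user's
# words back with first and second person swapped.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# (pattern, response template) pairs; {0} is filled with the
# reflected fragment captured by the pattern.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line: str) -> str:
    """Return a canned Eliza-style reply for one line of user input."""
    for pattern, template in RULES:
        m = pattern.match(line)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # bland fallback when no keyword matches
```

With nothing but a keyword table and string substitution, `respond("I am sad")` yields “How long have you been sad?”, which is the whole illusion: there is no model of meaning anywhere, yet people read intelligence into the echo.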
Although the challenge was at first embraced by the academic A.I. community, passing the Turing test — which proved to be a rather more difficult nut to crack than some prominent A.I. people had said it would be — has long since fallen out of fashion as a legitimate goal or benchmark among “real” A.I. researchers. The A.I. establishment has for more than a decade put more energy into explaining why the Turing test is irrelevant than it has into passing it.
The Loebner competition is designed to provide a real-world opportunity for bots to pass the Turing test. And even though the academics are loudly boycotting the Loebner competition today, that doesn’t mean it wants for entrants.
There are three categories in the Loebner Prize. The grand prize of $100,000 and a gold medal bearing the likeness of Hugh Loebner will go to the first non-human entity that passes what Loebner calls the “full” Turing test (although the criteria for what constitutes “full” are a little vague). A silver medallion and $25,000 will go to the winner of a somewhat more rigorously defined “limited” Turing test. And a bronze medallion and $2,000 are awarded annually to whichever program is judged best according to the rules in force at that time.
Contestants have included two-time winner Robby Garner, a self-taught computer programmer and robot builder; Benji Adams, the creator of the Web site Personality Forge, home of hundreds of chatterbot “personalities”; and perhaps the most famous (or infamous) of them all, two-time winner Dr. Richard Wallace, founder of the ALICE Foundation. Dr. Wallace, who, by his own admission, suffers from a debilitating mental illness — which he mitigates with daily doses of medicinal marijuana — is known not only for his unorthodox theories about consciousness and intelligence but also for the restraining order that keeps him off the University of California campus in Berkeley. Something about the Loebner Prize seems to draw eccentrics out of the woodwork, and chaos itself is the very essence of the annual ritual.
The 2002 contest, hosted in Atlanta by the Institute of Mimetic Science, had the most participants ever. More than 40 bots were entered, from which, by some apparently occult process, a final field of eight was selected.
The event itself, by most accounts, was conducted with all the dignity and pomp of the finale of the Marx Brothers’ “Duck Soup,” although it perhaps lacked some of Freedonia’s essential seriousness. Every person with whom I spoke about it said that last year’s contest was an utter fiasco, with unclear rules, inconsistent judging, arbitrary fiats by an opaque prize committee, petulant prima donnas, and last-minute changes of venue that prevented most entrants from even discovering where the contest was taking place until after it had happened.
There are strongly divergent views about the cause of those problems, and in asking some of the principals a few questions about it I soon felt that I had set myself up as Judge Wapner, with each of the aggrieved parties anxious to get their story out to the world. Benji Adams was the most forthcoming; his 26-page memo, in its righteousness and obsessive attention to every slight — replete with chronologies and an e-mail trail documenting the alleged incompetence or malfeasance of the Loebner competition committee — reminded me of nothing so much as Kenneth Starr’s report to Congress.
By tradition, three things happen at the conclusion of every Loebner contest: The winners take their prizes and run for the nearest exit, Hugh Loebner basks in glory, and the hosting organization takes a solemn oath: “Never again.”
Over the 11 years of its existence the Loebner competition has taken place in nine locations, including the Boston Computer Museum, the London Museum of Science, Dartmouth College, Flinders University of South Australia and, during a particularly dark period in the prize’s short history, in the billiard room of the private Salmagundi Club in New York City, of which Loebner is a member.
Despite the differing locations, the contest has always been conducted under the auspices of the Cambridge Center for Behavioral Studies, an obscure Massachusetts-based nonprofit that apparently doesn’t do much beyond running the Loebner competition and yet, as we’ll see, would dearly love to have nothing to do with the contest at all.
There’s no waiting list of prestigious institutions clamoring for the honor of physically hosting the contest these days. Rather, the Loebner competition has proven so stressful to its host that it’s seldom invited back, and each year the Cambridge Center must scramble to find a new home for it, as a social service agency might scramble to find a foster home for a troublesome adolescent. Three years ago the Cambridge Center seemed to have found a permanent home for its problem child: With great fanfare, the London Museum of Science and the Cambridge Center jointly announced that the museum would be home to the Loebner competition for the next 44 years. After one dose of the Loebner competition, however, the museum had changed its mind and annulled the agreement. Once again the Cambridge Center was shopping for a suitable foster home.
Every year finding another institution to host the Loebner competition becomes a little harder; “selling” the Loebner has become, for the Cambridge Center, a job akin to selling the Brooklyn Bridge. And yet each year, one way or another, Loebner himself has made sure that the show goes on.
Today, however, Hugh Loebner is on the brink of initiating legal action that may finally do what the combined efforts of his detractors have been unable to do for a decade, which is to make him take his money and go home. For years Loebner and the Cambridge Center for Behavioral Studies have been stuck in a dysfunctional marriage of sorts. But now Loebner and the center are flirting with divorce, and it may be a messy one. If they do wind up in court, which Loebner tells me is likely, it’s hard to see how the prize will survive.
In fact, although the University of Surrey (U.K.) has announced that it will host the 2003 competition next October, there’s always a possibility that it will back out. The last Loebner bronze medallion may already have been won, and the silver and gold medallions may wind up for sale on eBay before anyone has earned them.
It wasn’t always like this.
When Hugh Loebner created the Loebner Prize in 1989 to spur progress toward technology that could pass the Turing test, the A.I. establishment welcomed him with open arms. Under the aegis of the Cambridge Center, a blue-ribbon panel chaired by cognitive scientist and author Daniel Dennett and composed of a who’s who of computer scientists from leading institutions organized the first contest and wrote the first rules. When, after two years of planning, the first event was held in 1991 at Boston’s Computer Museum, it was a gala affair partially underwritten by the National Science Foundation and the Sloan Foundation.
Hype had been building for months, and when the actual contest finally took place, reporters from the popular and scientific press thronged the hall. Robert Epstein, director emeritus of the Cambridge Center and a former subordinate of Loebner’s at the University of Maryland, Baltimore County (UMBC), acted as master of ceremonies. When you consider that Loebner’s degrees are in sociology, not computer or cognitive science, and that before his return to the family business his only employment in the groves of academe was as an assistant director of the statistics center at UMBC, you see what a stunning coup he had pulled off. It was as if the goofiest nerd in the entire high school had wormed his way into the “A” clique, the one led by the rich, handsome quarterback and the beauty queen/state tennis champ.
There was only one problem: the A.I. programs entered in the contest performed horribly. They were pathetic. The mountain, as it were, had gone into labor and given birth to a mouse.
“Perhaps the most conspicuous characteristic of the six computer programs was their poor performance.” This damning summary of the first event appears in the article “Lessons From a Restricted Turing Test,” by Harvard professor Stuart Shieber, in the Communications of the Association for Computing Machinery (CACM), the house organ for the computer science establishment. The winning program, by Joseph Weintraub, was a very simple variant of Joseph Weizenbaum’s 25-year-old Eliza program. Worse, the article reported, “Dr. Epstein, in a speech after the event, noted that he had learned from the day’s proceedings that ‘little progress has been made in the last twenty-five years.’”
It had to be said, and Epstein said it: The emperor had no clothes. After decades of government-funded research by the brightest minds in computer science, A.I. programs still stank, and the National Science Foundation and Sloan Foundation had just spent $80,000 to demonstrate this sad fact to the world. Now what?
The answer, according to Shieber, was obvious: stop running the contest! The world wasn’t ready for a Loebner Prize, he said, so having one was counterproductive.
The wheels of the ACM grind exceedingly fine, but they grind slowly, and Shieber’s report on the 1991 Loebner contest did not appear until the spring of 1994. In the meanwhile Dennett’s committee gamely soldiered on and conducted two more contests (attended by considerably less hype and similarly unimpressive results). But once Shieber’s article appeared, the world had changed. The Party had spoken, saying, in effect, “Loebner is making us look bad. Make him go away (but see if you can find a way to keep the money).” Dennett and other members of the committee tried to get Loebner to change his contest along lines suggested by Shieber, but he refused. Shortly thereafter Dennett and his committee resigned en masse, angry at Loebner’s intransigence and generally sick of dealing with him. Dennett is still bitter about this episode.
Actually, the themes “angry at Loebner” and “generally sick of dealing with him” came up a lot in my research for this article, although not many people were willing to use those phrases for attribution. For example, I discovered that sometime before Dennett’s resignation, Loebner and his old friend Epstein had ceased to be on speaking terms. Epstein was the first of many people associated with the contest who are no longer on speaking terms with Hugh Loebner. Even his friends and fans find him, well, hard to take in anything other than small doses. In the words of Robby Garner, a two-time Loebner bronze winner and currently a member of the 2003 contest committee: “Loebner was criticized early on for his self-aggrandizement, and he has hardly disappointed any of those detractors if they have ever spent any time or shared a meal with the man.”
Who is this man who seems to attract controversy the way picnics attract ants?
* * *
I first heard of Hugh Loebner from a friend of mine who had attended a conference in the fall of 2002 for users of the Lisp programming language, the language favored for most A.I. work. He told me about Loebner, about his prize, and about what it was like to have a conversation with him. “Loebner’s one of those guys who’s always trying to catch up with his own thoughts,” my friend said. “You can almost see him running after them.” And then he added, “It’s not clear whether going through life both very intelligent and very ignorant is a wise strategy, but that seems to be the one he has chosen.”
I was intrigued by my friend’s story about the Loebner competition, because last August I published a novella, called “Cheap Complex Devices,” that purported to comprise the chronicle of, and the results from, a storytelling competition between researchers vying for the inaugural “Hofstadter Prize for Machine-Written Narrative.” The contest was named after Douglas Hofstadter, the Peter Pan-ish author of the Pulitzer Prize-winning favorite book of every freshman student of cognitive science: “Gödel, Escher, Bach: An Eternal Golden Braid.”
Like Hofstadter’s, my book is about minds and machines, and it is basically incomprehensible (at least to me), but unlike his book, mine at least has skulduggery, sex, egomaniacal and possibly delusional computer scientists, pathos, a bitterly feuding prize committee, and a whore with a heart of gold. I really like my book, and I’m glad I wrote it. So I am quite happy that I had never heard of Hugh Loebner, or of the Loebner Prize for Artificial Intelligence, before I undertook to write “Cheap Complex Devices,” for had I known then what I know now, I never would have bothered. The story had already been done!
Among other things, “Cheap Complex Devices” is a lampoon of the technology cult of MIT’s A.I. Laboratory (co-founded by Marvin Minsky) and Media Lab (with which Minsky is closely associated) and places like them. Being a techno-paranoid humanist, I find techno-utopian trans-humanists of the Minsky ilk an affront. I desire to live in a world without future shock, and they want to shock me every chance they get.
I loathe “progress”; they worship it. But, paradoxically, I find myself annoyed by the way the A.I. elite has, for decades, promised far more than it has delivered. So I’m annoyed with self-promoting A.I. visionaries for moving both too fast and too slow. Or maybe I’m just touchy because as a member of the usability engineering group at Sun Microsystems in the early ’90s I was a frequent participant at conferences where Media Lab triumphalism was as predictable as sunrise. In any event I felt like giving them a pie in the face.
Which is to say that although Loebner wants to live in the 22nd century and I want to live in the 19th, both of us seem to have a thing about tweaking the noses of the A.I. establishment. I was predisposed to like the fellow.
I checked out Loebner’s Web site. It was cheesy, poorly laid out, and surprisingly light in content. Somehow I had expected more pizzazz from somebody aggressively pushing the limits of computer intelligence. I had not known, before visiting his site, that his favorite political causes are the legalization of prostitution and setting the record straight about the Olympics’ so-called gold medals. (They’re actually gold-plated silver, it turns out, so they should be called “gilt” medals.)
I found myself in agreement with Loebner on both of those issues, and I respected his being an “out and proud” john, that is, someone who pays for sex with prostitutes. On the other hand, being neither an Olympic athlete nor a patron of sex workers, I didn’t find myself especially worked up about either topic. I was about to surf someplace else and forget all about this self-proclaimed gadfly. But then I read about his spat with Marvin Minsky, and I knew I had to talk to this guy.
I first spoke with Loebner last November. I wanted to find out about his prize, but mostly I wanted to ask him about his public virtual spitball fight with Minsky.
I was a little nervous in calling him, because being a reporter doesn’t come easy to me — I’ve never been good at taking notes. I was a little taken aback, then, that he didn’t remember who I was or that we had set up an appointment, and that he was evidently taking my call on the manufacturing floor of his company, Crown Industries of East Orange, N.J. — a maker of stanchions and ropes (used by banks to keep their customers in line), brass rails and fittings, bellman’s carts, pedestal tables, folding table legs, and roll-up illuminated disco floors. People were loudly speaking in the background and things were clanging, and a couple of times it sounded as though he had put his hand over the phone and was giving directives while I was speaking to him. Several times he forgot what I had asked him. Like King George, he tended to say “What? What?” I would repeat my question and he would say, “Oh yes. Oh yes. Minsky. Quite the story. Quite.” So much for the Bill Moyers thing.
Loebner talks very, very fast. And he doesn’t seem to pay much attention to what you’re saying. Nevertheless I found him pleasant and interesting. He speaks with the candor of a child, and sometimes uses a child’s language. For example, I asked him how he had decided to put up such a large sum of money to endow his prize.
“I wanted to draw attention to the Turing test,” he said. “I thought it was about time that somebody did. Daddy didn’t want to, but Daddy died. So I put up the money.”
That’s one of the few verbatim quotes I have from our first conversation. My note-taking skills were no match for his output.
Loebner has many enthusiasms. He likes prostitutes. He likes marijuana. He likes pornography. He likes the Loebner Prize. He likes wine and fine paintings. And he likes Hugh Loebner. He spoke enthusiastically about all those things. But I was surprised to learn that he didn’t seem to care much about artificial intelligence, per se.
His interest in the field has always been pragmatic, he told me, never philosophical. He’s a hedonist who thinks work is an abomination and sloth is our greatest virtue. He got interested in A.I. because he hoped the day would come when robots and A.I.’s could do all the work and people could play all the time. I asked him about the current state of A.I. research. Well, he said, some progress was evidently being made in the area of “decision support” systems, but that didn’t really interest him too much. What about the quality of entrants in his competition? “Pretty gruesome,” he said. “Gruesome.”
I was surprised to learn that he had not even read the transcripts of the prior year’s competition. He didn’t very much care for chatting with the bots himself; they were too stupid.
I was also surprised at how open Loebner was about his emotions. He laughed a lot and yet he told me, after we had spoken for perhaps a minute, that he was deeply depressed about his prize, that he had not slept at all for the previous two nights from worrying about it. He spoke to me, after knowing me for 60 seconds by phone, in a way that I would only speak to my closest confidants. He was in dark despair, he told me.
What was troubling him was that it was November 2002 and the Cambridge Center had not yet announced the date and venue for the 2003 contest. Loebner had just instructed his attorney to tell the center that if it did not make an announcement by Jan. 2, 2003, he would sue to get his money back and thereafter administer the prize himself. He seemed genuinely anguished by the prospect of going to court, but resolute.
When he had made the gift to the Cambridge Center he attached only three strings, he said: that the Loebner contest be conducted every year, that a prize be given every year, as long as there was at least one entrant, and that the rules by which the contest was run be acceptable to Loebner and to the Cambridge Center. Now he was suspicious that the center, because it had not announced the next year’s competition, was leaving open the possibility that no contest would be held. This prospect drove him nuts. As far as he was concerned, if the Cambridge Center could not find a suitable organization to conduct the contest, then the center’s staff should conduct it themselves.
The job of organizing and conducting the contest was trivial, he said. “It’s a very simple concept. One person could do it without headwind.” If it so transpired that organizing the contest fell to him, he said, he would probably just conduct it in his New York City apartment. The important thing was that the contest happen, not where it happened or who sponsored it.
If one thing makes Loebner see red, it’s the idea that his contest will not be held every year. “On this point I have been stalwart,” he said. “I have been adamant. I have been steadfast.” And then he offered to send me copies of recent and pending letters between his lawyers and the center’s.
The more he told me of his simmering argument with the Cambridge Center, the more I felt myself being handed a mini Mike Wallace-style story, which made me uncomfortable. “Why are you telling me all this?” I asked him. “I’m a reporter. Do you really want me to write a story ‘Eccentric Philanthropist Set to Sue His Own Contest’? Won’t that just make matters worse for you?”
“Sure,” he said. “But go ahead. Write it. Stir things up. Let’s see what happens.”
And then we talked about the whole Minsky episode.
It’s instructive to take a closer look at Shieber’s 1994 CACM article. Shieber goes on at great length about just what a stupid idea the whole Loebner Prize was and then proceeds to lecture Loebner about how to spend his money:
“Given that the Loebner Prize, as constituted, is at best a diversion of effort and attention and at worst a disparagement of the scientific community, what might a better alternative use of Dr. Loebner’s largesse be? …. In order to prevent degrading of the imprimatur of the reconstructed Loebner Prize, it would be awarded on an occasional basis, only when a sufficiently deserving new result, idea, or development presented itself.”
Shieber was well aware that Loebner had, in making his gift to the Cambridge Center, set three stipulations. Shieber derisively dismissed all three. He suggested instead that Loebner give his money to a commission of experts (such as Stuart Shieber) in order that they might give it out whenever they felt like it, according to criteria set by experts — without any meddling from disco-floor makers or other amateurs.
Never a shrinking violet, Loebner wrote a response, published in the same issue, taking Shieber head-on.
“Shieber would like to tell me how I should spend my money. He suggests alternative prizes for my ‘largesse.’ In my letter of December 30, 1988 to Dr. Robert Epstein, wherein I authorized Dr. Epstein to move forward with the contest, and referring to the Turing Test, I concluded with these words: ‘Robert, in years to come, there may be richer prizes, and more prestigious contests, but gads, this will always be the oldest.’ Well, one out of three isn’t bad. I was aware when I penned those words that I had no patent on prizes, and that opportunities to reward advances in A.I. were not barred to others. I look forward with great anticipation to The Shieber Prize.”
He went on, a bit later, to give some of his reasons for initiating the prize:
“My primary purpose was to develop the Turing Test itself. By the time I thought of my prize, Turing’s article proposing the test was decades old. A.I. scientists and philosophers regularly discussed the test, yet no one had taken steps to implement it … The initial Loebner Prize contest was the first time that the Turing Test had ever been formally tried. This in itself justified the endeavor. It also introduced the Turing Test to a wide public, and stimulated interest in it. To my knowledge, no one had asked, let alone answered, the many important questions about the Turing Test which must eventually be solved.”
Then Loebner rebutted Shieber’s contention that the contest was premature and gave his rationale for insisting on an annual contest, concluding: “I am not worried that the winning entries in early years are primitive. Their inadequacies are incentives for others to enter the contest.”
In going through the ACM online archives, I was amused to see contemporary commentary to the effect that Shieber had hit the nail on the head and that silly Hugh Loebner just didn’t get it. This struck me as odd, because to my reading Shieber’s analysis and recommendations seem snide and puerile, whereas Loebner’s seem cogent and, what’s more, playful. “My reaction to intelligence is the same as my reaction to pornography,” he wrote. “I can’t define it but I like it when I see it.”
In 1995, about a year after the publication of Shieber’s article, Marvin Minsky, the father of artificial intelligence, posted a notice on the comp.ai and comp.ai.philosophy Usenet newsgroups. In it he drew attention to a clause in the Loebner contest rules to the effect that using the term “Loebner Competition” without permission could result in a revocation of the prize.
Minsky wrote, “I do hope that someone will volunteer to violate this proscription so that Mr. Loebner will indeed revoke his stupid prize, save himself some money, and spare us the horror of this obnoxious and unproductive annual publicity campaign. In fact, I hereby offer the $100.00 Minsky prize to the first person who gets Loebner to do this. I will explain the details of the rules for the new prize as soon as it is awarded, except that, in the meantime, anyone is free to use the name ‘Minsky Loebner Prize Revocation Prize’ in any advertising they like, without any licensing fee.”
(Minsky did not respond to e-mails requesting an interview.)
If the CACM article marked Loebner’s fall from grace, the Minsky note on comp.ai marked his utter banishment into the wilds of A.I. quackery.
Can you imagine, for example, being a graduate student in computer science at a big-name school in 1996 and telling your major professor that your goal was to win the Loebner? Loebner was more “out” than Liberace.
But Loebner did not take his snubbing meekly. Loebner immediately wrote back that the best way for Minsky to get Loebner to revoke his prize was to win it. Of course Minsky had already hinted that Loebner had never made clear what the rules for winning the prize were, so that was not a very satisfactory rejoinder. But then a few days later (“while taking a nice hot bath, drinking a fine wine, about an hour after smoking a really fat joint”), Loebner came up with a more considered and clever response, one that still rattles Minsky nearly a decade later.
Minsky had announced that he would give $100 to whoever made Loebner stop his contest. But Loebner would only stop his contest when somebody won the gold medal. Therefore, Loebner reasoned, Minsky, being an honorable man, would give $100 to whoever won the ultimate Loebner competition. Therefore, Marvin Minsky was a cosponsor of the Loebner competition, simple as that. It was delicious!
Loebner promptly issued a press release saying that Marvin Minsky was now a cosponsor of the Loebner Prize, by virtue of his announcement of the “Minsky Loebner Prize Revocation Prize.” What made this development so delightfully ironic was Minsky’s own statement that anyone was free to use the name “Minsky Loebner Prize Revocation Prize” in any advertising they liked, which made it nearly impossible for Minsky to prevent Loebner from doing just that. Which is why Loebner continues to cite Minsky as a cosponsor of his event every chance he gets.
The image that comes to my mind whenever I think of this development is from the sublime cartoons of the late, great Chuck Jones, with Hugh Loebner in the role of Bugs Bunny, and Marvin Minsky, the father of artificial intelligence, in the role of Yosemite Sam, stamping his feet, with smoke coming from his ears. In fact, Minsky is still listed as a cosponsor of Loebner’s prize on the Web site, and, as we’ll see, Minsky is still stamping his feet.
End of Part 1.