[Image: Jude Law in "A.I. Artificial Intelligence" (Dreamworks)]

Robots aren't getting smarter — we're getting dumber

Proof that human intelligence is in irreversible decline: A chatbot passed a legendary computing test


Andrew Leonard
June 9, 2014 7:58PM (UTC)

Huge artificial intelligence news! Our robot overlords have arrived! A "supercomputer" has finally passed the Turing Test!

Except, well, maybe not. Here's what actually happened: For five whole minutes, a chatbot managed to convince a third of the judges that it was "Eugene Goostman" -- a 13-year-old Ukrainian boy with limited English skills.


Alan Turing would not be impressed. We don't know for sure, but when the computing pioneer imagined the possibility that one day computers would be able to "reliably" fool you and me into believing that they were human, he probably assumed that the artificial intelligences of the future would have mastered proper grammar. And he set no bogus five-minute time restrictions, either. Sentience, this is not.

So, raspberries to the Guardian and the Independent for uncritically buying into the University of Reading's press campaign. Extra credit to Vice and BuzzFeed for debunking.

But the bogosity of Eugene Goostman's artificial intelligence does not mean that we shouldn't be on guard for marauding robots. Because there is a very important lesson to be learned from the Reading AI Turing Test. The AIs may or may not be getting smarter -- but we're definitely getting dumber.

Proof of this arrives in research conducted by a group of Brazilian computer scientists in the paper "Reverse Engineering Socialbot Infiltration Strategies in Twitter." The researchers created hundreds of automated Twitter-posting bots, released them into the wild, and tracked how many followers and retweets they picked up -- or whether they were exposed as bots by Twitter and kicked off the network.

Out of 120 bots, 38 were suspended. But, as MIT Technology Review reported, over the course of the experiment "the bots received 4,999 follows from 1,952 different users."

More surprisingly, the socialbots that generated synthetic tweets (rather than just reposting) performed better too. That suggests that Twitter users are unable to distinguish between posts generated by humans and by bots. “This is possibly because a large fraction of tweets in Twitter are written in an informal, grammatically incoherent style, so that even simple statistical models can produce tweets with quality similar to those posted by humans in Twitter,” suggest Freitas and co.

(Emphasis mine.)
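To see how little machinery that claim requires, here is a minimal sketch of the kind of "simple statistical model" the researchers describe: a word-level Markov chain that learns which word tends to follow which, then random-walks its way to a new tweet. The corpus and function names below are purely illustrative -- they are not from the paper.

```python
import random
from collections import defaultdict

def train_markov(tweets):
    """Build a word-level bigram table: word -> list of observed next words."""
    chains = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for prev, nxt in zip(words, words[1:]):
            chains[prev].append(nxt)
    return chains

def generate_tweet(chains, max_words=20, seed=None):
    """Random-walk the chain from a random start word to emit a synthetic tweet."""
    rng = random.Random(seed)
    word = rng.choice(list(chains))
    out = [word]
    while len(out) < max_words and chains.get(word):
        word = rng.choice(chains[word])
        out.append(word)
    return " ".join(out)

# Tiny made-up corpus in the informal register the paper describes.
corpus = [
    "much turing very ai wow",
    "so bot such smart wow",
    "very ai much wow so smart",
]
chains = train_markov(corpus)
print(generate_tweet(chains, seed=1))
```

Every word the generator emits was seen in the training tweets, so in a feed full of fragmentary, ungrammatical posts, its output blends right in -- which is exactly the point the researchers are making.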


Hey, guess what? On Twitter, we're all 13-year-old Ukrainian boys with limited English skills. We have created an environment in which stupid robots can rule.

Seriously, how hard can it be for a bot to imitate doge-speak?

Much Turing. Very AI. Wow.

The language of online discourse, of likes and favorites and shares and 140-character communiqués in which every punctuation mark is a wasted space -- this is a language so imbecilic that robots can learn it easily.

And it's probably a language we should unlearn, if we want to maintain our sanity, not to mention our culture.



Andrew Leonard is a staff writer at Salon. On Twitter, @koxinga21.


