Jaron Lanier (AP/Michael Probst)

Don't trust people who worship technology

Jaron Lanier and Andrew Keen make powerful arguments against tech utopianism


Scott Timberg
March 10, 2016 10:02PM (UTC)

A debate on artificial intelligence took place at the 92nd St Y on Wednesday night, and it took on a sweep that went well beyond the subject of AI. This wasn’t just because the victory of a Google bot over a South Korean champion at the ancient game of Go has made the matter of AI especially pressing. Wednesday's discussion highlighted some of the best – and worst – ways we think about technology. The whole debate, called “Don’t Trust the Promise of Artificial Intelligence,” is worth watching.

The best came from Jaron Lanier, who has an important place in these conversations. One of the key ways cyber-utopians shut down anyone who expresses doubts about technology is to brand them a “Luddite,” but this computer scientist, who helped develop virtual reality and recently sold a company to Google, is hard to tar that way. On Wednesday night, he gave a technical description of how AI worked and didn’t, and he concluded with one of the sharpest things anyone has said about our assumptions about the digital world.


To Lanier, AI evokes a kind of religious faith.

I absolutely believe in religious freedom, and I would never, never, never speak against somebody's beliefs. I respect them. All I ask for is the separation of church and state. Without a separation of church and state, there can be no religious freedom. Never more true than when it comes to AI.

(Lanier and fellow skeptic Andrew Keen, who came out a little later, spoke eloquently about the way digital technology erodes employment. But let’s stick with the religion idea for now.)

From Martine Rothblatt, the CEO of United Therapeutics and author of “Virtually Human,” came the kind of utopian talk we often hear from transhumanists. She described her hopes for AI, and it’s hard not to shudder a bit.

“The promise of artificial intelligence… is a replication of human consciousness,” she said, explaining how it could help us manage traffic, air travel, healthcare. She also spoke about the way A.I. could help people with dementia. Well, this sounds good. Who can object to any of that? But what seems more likely is that on the way, lots of people will be put out of work. And her next idea may be the most chilling of the evening:

We will love these AIs. We'll love them in the same way that we love our cats and our dogs, that we love our friends, ones that we see distantly or the ones that we see frequently, because if it is a replicated human mind it'll have all the cool features of human minds, being able to answer questions, be able to really frame the rest of the sentence before a sentence is finished, be able to feel empathy and when we're sad help us feel better, and we're happy joining in that joy.

Does it make me a Luddite to hear something a little creepy in this? It’s almost literally the premise of Philip K. Dick’s “Do Androids Dream of Electric Sheep?,” the novel on which “Blade Runner” was loosely based.

Rothblatt also argues that we will pick the good AI and not the bad stuff.

AI will arrive in a natural environment in which humans are the agents of selection. We will select for the friendly AI and we will stamp out the unfriendly AI. So I believe that the promise of AI will be a good one and we should believe it because the environment in which AI evolves will be a human selection environment and the mass activities of hundreds of millions of people will select for the friendliest AI.


Friendly AI sounds like a great idea. But since when has technology ever worked like this – only arriving in forms that everyone likes? What about people who design viruses that crash your computer? What about weapons that fall into the hands of terrorists? What about digital surveillance technology? To think it will almost all go in the right direction is a bit strange.

There’s nothing wrong with technology, digital or otherwise, as long as we recognize its limits and see it for what it is. “We’re not against the technology itself,” Keen, author of “The Internet is Not the Answer,” pointed out. “There’s a problem not with the technology… but with the ideology around the technology.” He says it’s become a new kind of “liberation theology.” Keen asks: “Who is going to own these platforms?... We haven’t thought this stuff through.” (James Hughes of the Institute for Ethics and Emerging Technologies was also part of the debate.)

What the happy talk about AI and technology shows is that for some people – even for very smart people like Rothblatt – digital technology has indeed become a religion. And it summons a kind of awe that could be very dangerous.


Scott Timberg

Scott Timberg is a former staff writer for Salon, focusing on culture. A longtime arts reporter in Los Angeles who has contributed to the New York Times, he runs the blog Culture Crash. He's the author of the book, "Culture Crash: The Killing of the Creative Class."
