Bill Gates and Elon Musk are wrong: Artificial intelligence is not going to take over the world

Even the simplest bacteria are smarter than the most advanced AI

Published October 15, 2015 10:56PM (EDT)


This article originally appeared on AlterNet.

There's been a lot of chatter in the past year about the menace of artificial intelligence. This summer's movie season offered four films featuring major AI characters, most recently Terminator Genisys, but also Avengers: Age of Ultron, Ex Machina, and Chappie. Following in the venerable virtual footsteps of HAL from 2001: A Space Odyssey, Hollywood is, for the most part, in the habit of painting its AI characters as sociopaths bent on the destruction of the story's protagonists as well as all mankind. In the interminable Terminator franchise, the "machines" have even learned how to travel through time, bending not only the laws of physics but also the audience's tolerance for convoluted plot twists.

Hollywood's fixation with the threat of AI only exacerbates the public's predisposition to worry over abstract bogeys while ignoring more pressing concerns like climate change. It doesn't help when arguably the world's most famous scientist, Stephen Hawking, reinforces the hysteria with references to yet another movie, Transcendence, featuring a sociopathic AI antagonist.

Where the oracle of Cambridge goes, tech billionaires dutifully follow, and so Elon Musk, Bill Gates and Steve Wozniak have piled onto the fearmongering. Taking full advantage of the bully pulpit afforded him by his revered standing as a leading theoretical physicist, Hawking has gone on the public record with ominous pronouncements about AI on numerous occasions. In an interview with the BBC, he said that the "development of full artificial intelligence could spell the end of the human race." This, he explains, is likely because "humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

Stephen Hawking has given us mind-boggling insights into the nature of the cosmos, most notably the notion that black holes, contrary to the name, actually emit radiation. He's won just about every accolade for which physicists are eligible, short of the Nobel Prize. But in the cult of scientism, physicists, as public intellectuals, take on the role that high priests once played in pre-industrial societies. Hawking's proclamations on matters well beyond his expertise take on an almost oracular significance.

As great a cosmologist as Hawking undoubtedly is, he's a lousy philosopher of science, let alone predictor of the far-off future. He himself has dismissed philosophy altogether—as nonsense rendered obsolete by the omniscience of monolithic science. In his blinkered rationalist worldview, he's probably referring exclusively to analytic philosophy, which indeed has lost much of its credibility due to its obsession with aping the technical rigor of mathematics. But to dismiss philosophy as a whole is to betray a disturbing indifference not only to a rich Western intellectual tradition, but to the limits of science.

Since it's far-fetched to imagine Hawking poking a stick into a steaming pile of goat entrails as he, in a trance state, practices the ancient art of haruspicy, we might as well think of him practicing a decidedly more modern form of divination. Let's call it algorithmancy, divination by algorithm, which is the method of choice for technophiles worldwide.

Part of the problem is that the term "artificial intelligence" itself is a misnomer. AI is neither artificial nor all that intelligent. As any food chemist will tell you, beyond the trivial commonsense definition, the distinction between natural and artificial is arbitrary at best, and more often than not ideologically motivated. AI isn't artificial, simply because we, natural creatures that we are, make it.

Neither is AI all that intelligent, in the crucial sense of autonomous. Consider Watson, the IBM supercomputer that famously won the American game show "Jeopardy!" Not content with that remarkable feat, its makers have had Watson prepare for the federal medical licensing exam, conduct legal discovery work better than first-year lawyers, and outperform radiologists in detecting lung cancer on digital X-rays.

But compared to the bacterium Escherichia coli, Watson is a moron. The first life forms on earth, bacteria have been around for 4 billion years. They make up more of the earth's biomass than plants and animals combined. Bacteria exhibit a bewildering array of forms, behaviors and habitats. Unlike AI, though, bacteria are autonomous. They locomote, consume and proliferate all on their own. They exercise true agency in the world. To be more precise, bacteria are more than just autonomous; they're autopoietic, self-made. Despite their structural complexity, they require no intervention, human or otherwise, to evolve.
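
For the technically curious, the classic run-and-tumble account of E. coli chemotaxis hints at how much behavior a bacterium wrings out of a simple feedback loop: run straight, tumble to a random new heading, and tumble less often when life is improving. The toy simulation below is a textbook caricature of that loop, not a biological model; every constant in it is invented for illustration.

```python
import math
import random

# Toy run-and-tumble chemotaxis, loosely inspired by E. coli.
# The cell runs in a straight line and occasionally "tumbles" to a
# random new heading; it tumbles less often when the attractant
# concentration is rising, so it drifts up the gradient with no map,
# no planner, and no external controller.

def concentration(x, y):
    """Attractant peaks at the origin and falls off with distance."""
    return math.exp(-math.hypot(x, y) / 50.0)

def simulate(steps=2000, speed=1.0):
    x, y = 100.0, 100.0                      # start far from the source
    heading = random.uniform(0, 2 * math.pi)
    last_c = concentration(x, y)
    for _ in range(steps):
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        c = concentration(x, y)
        # Tumble rarely when things are improving, often when they're not.
        if random.random() < (0.02 if c > last_c else 0.2):
            heading = random.uniform(0, 2 * math.pi)
        last_c = c
    return math.hypot(x, y)                  # final distance from source

print("final distance from source:", round(simulate(), 1))
```

Run it a few times and the walker reliably ends up near the source. That's the point: agency-like behavior from a feedback loop, which the real cell implements in protein chemistry with no one writing its code.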

Yet when we imagine bacteria, we tend to evoke illustrations that misleadingly simplify them. On a molecular scale, it's more apt to imagine each bacterium not as a pill with a tail but, as James Lovelock, the originator of the Gaia hypothesis, has it, like the island of Manhattan. Within the bacterium, one of the simplest living things on earth, an array of structures (capsule, wall, membrane, cytoplasm, ribosomes, plasmid, pili, nucleoid, flagellum) works in concert to common ends. In this astonishing symphony of internal and external organic activity, there is undeniably a vast intelligence at play, an intelligence of which we, as rational scrutinizers, have but a dim grasp.

Here's an example that helps to illuminate the stark contrast in intelligence between a lowly single-celled organism like Escherichia coli and AI. A few years back, the noted Wall Street quant D.E. Shaw started a not-for-profit spinoff of his hyper-successful hedge fund with the sole purpose of analyzing the molecular dynamics of proteins within cells, in the hopes of laying the groundwork for future medical breakthroughs. He skimmed off a pile of cash from his vast fortune and hired a platoon of scientists and engineers to make this possible. They custom-built their own supercomputer, which they dubbed Anton, in honor of the pioneering Dutch microscopist Anton van Leeuwenhoek.

Anton and its successors spend their days bringing world-class processing power to bear on the stochastic wrigglings of protein chains. Let's come back to Lovelock's analogy that, in terms of complexity, equates a single cell to the island of Manhattan. Extending the analogy, what Shaw and his minions are doing with their super-expensive supercomputers is modeling one apartment building, say, at the corner of East 82nd and York. Shaw would need an army of Antons to even begin to approximate the hubbub of the entire island.
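
To get a sense of what those supercomputers actually do all day, here is a classroom-style sketch of a molecular-dynamics step: a one-dimensional bead-spring "protein" nudged by springs and thermal noise. It is emphatically not Anton's method or D.E. Shaw Research's code, and every parameter is made up. The point is the arithmetic: even this toy does work on every bead at every tiny time step, and real proteins demand billions of such steps in three dimensions with millions of interacting atoms.

```python
import random

# A toy "protein": a 1-D bead-spring chain integrated with overdamped
# Langevin dynamics. Each step does O(N) work, and realistic simulations
# need billions of femtosecond-scale steps; that gap is why Anton-class
# machines exist. (A classroom sketch, not D.E. Shaw Research's method.)

N, K, DT, NOISE = 20, 1.0, 0.01, 0.1   # beads, spring constant, time step, noise scale

def step(x):
    forces = [0.0] * N
    for i in range(N - 1):             # harmonic springs, rest length 1
        f = K * ((x[i + 1] - x[i]) - 1.0)
        forces[i] += f
        forces[i + 1] -= f
    # Overdamped update: drift along the force plus thermal kicks.
    return [xi + DT * fi + NOISE * random.gauss(0.0, DT ** 0.5)
            for xi, fi in zip(x, forces)]

positions = [float(i) for i in range(N)]
for _ in range(10000):
    positions = step(positions)
print("end-to-end distance:", round(positions[-1] - positions[0], 2))
```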

Human behavior, in all its predictably irrational glory, is still the culmination of a complexity that dwarfs the relative primitiveness of the bacterium. Our bodies consist of 10 trillion eukaryotic cells working in concert with 100 trillion non-human guest cells. Our minds—grounded in these bodies—interact with a vast, dynamic world. Max Galka, an expert in machine learning, says that "machines have historically been very bad at the kind of thinking needed to anticipate human behavior." "After decades of work," he points out, "there has been very little progress to suggest it is even possible."

So this is where Hawking's conception of AI falters. He admonishes that "there is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains." This is the perspective of a physicist indoctrinated, at a formative age, in the "brain as computer" notion of human intelligence. But brains are organs in bodies made up of cells, and intelligence is much more than merely making "advanced computations." The molecules that make up the protein chains within a bacterium are doing more than jostling each other randomly. Something akin to "advanced computations" occurs here as well, inasmuch as bacteria cohere as living things.

To be sure, bacteria may not be rational in the prescribed way rationalists like Hawking define it, but they certainly exhibit an intelligence. They have needs. They act in the world to meet those needs. As such, prokaryotes (bacteria and their single-celled brethren, the Archaea) are much more than the organic equivalent of machines. Machines may act in the world, but they don't have agency. Machines extend the agency of those who make and use them. Machines are akin to prosthetics.

Accordingly, a more apt term for artificial intelligence is cognitive prosthetics. AI augments people's thinking, along with their goals, drives and prejudices. I say "people" and not "humans" because I don't want to imply a universality. Particular instances of AI, which is essentially computer code, are written and executed on computer networks by people in specific social contexts. An AI's drives are a reflection of the idiosyncrasies of the individuals who write its code. As John Havens, author of the forthcoming book Heartificial Intelligence, puts it, "Since humans are programming the code for AI, this essentially means we have to codify our own values before programming them." Not only must we codify our ethics in these systems; such value-inculcation is unavoidable. Watson, like Anton, has been set by its makers to work on specific tasks that serve the interests of the organizations that fund it. They are we.
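
Havens's point is easy to see in code. Below is a hypothetical news-feed ranking function; the names and weights are invented, but something like it runs wherever an algorithm decides what you read. No weight in it is neutral; each is a value judgment made by whoever typed it, deliberately or not.

```python
# A hypothetical news-feed ranking function. Nothing here is "neutral":
# every weight is a value judgment made by whoever wrote it, which is
# Havens's point about codifying values into the systems we program.

def score(post):
    return (3.0 * post["clicks_expected"]      # engagement over accuracy?
            + 1.5 * post["shares_expected"]    # virality over nuance?
            - 0.5 * post["reading_time_min"])  # brevity over depth?

posts = [
    {"title": "Celebrity feud!", "clicks_expected": 9,
     "shares_expected": 7, "reading_time_min": 2},
    {"title": "Climate report", "clicks_expected": 3,
     "shares_expected": 2, "reading_time_min": 12},
]
for post in sorted(posts, key=score, reverse=True):
    print(round(score(post), 1), post["title"])
```

Raise the reading-time penalty and the climate report wins; the "AI" faithfully serves whichever values were codified into it.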

Some of the Chicken Littles decrying the rise of the machines point ahead to the inevitable moment when our AI servants become "self-aware." Perhaps enraptured by millenarian fervor, Hawking has called this moment the "intelligence explosion." Others prefer what Vernor Vinge has dubbed, with all the histrionics of pulp science fiction, the "singularity." At this grand reckoning, AI will figure out it's in competition with us for resources, conclude we're a nuisance obstructing the realization of its own glorious civilization, and, with nary an inkling of compunction, mobilize to exterminate us.

But this presentiment again betrays an ignorance of how truly autopoietic intelligence functions. Rationalists like Hawking tend to think of consciousness as a magical essence that inheres in the human brain. Given the right recipe of algorithms and data, the thinking goes, a comparable sentience could magically erupt from the matrix of code that haunts our precious supercomputers. It's a delicious irony that those like Hawking who are so quick to scoff at the absurdities of intelligent design are also the ones who intone that we puny humans will create life out of 0s and 1s.

The improv director Keith Johnstone wrote, back in 1979, that "normal consciousness is related to transactions, real or imagined, with other people." I would take that a step further and say, echoing the philosopher Alva Noë, that consciousness, normal or otherwise, emerges from interactions with people, real or imagined. The brain hosts consciousness, yes, but the brain isn't in a jar. By obvious necessity, it works in conjunction with the vast, largely unconscious intelligence that is the body. Crucially, though, consciousness also arises from our interactions with each other. In effect, consciousness is just as external to the body as it is internal. It's distributed across the network of interactions, both symbolic and physical, that we have with each other. Sentience is social.

For any AI to become self-aware, it would have to become other-aware, since the self has no meaning outside of a social context. And to properly socialize in this way, our hypothetical AI entity would require a body beyond the circuits that comprise the internal environs of computers. Like a brain, an AI can't know itself without interacting with others in the world through a body. An AI's sense of self would emerge from this coupling of its body to a community of other bodies in the world. Most AI professionals realize this. The executive director of the Data Incubator, Michael Li, writes:

Early AI research...thought of reasoning as totally abstract and deductive: a brain in a vat just processing symbolic information. These paradigms tended to be brittle and not very generalizable. More recent updates have tended to incorporate the body and "sensory perception," inductively taking real-world data into account and learning from experience, much as a human child might do. We see a dog, and hear it barking, and slowly begin to associate dogs with barking.
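
Li's dog-and-bark example boils down to learning associations between co-occurring sensory channels. A bare-bones co-occurrence counter in the Hebbian spirit captures the germ of the idea; this is my sketch, not Li's code, and real systems use far richer statistics.

```python
from collections import Counter

# A bare-bones associative learner: count how often a sight and a sound
# co-occur, then predict the most strongly associated sound for a sight.
# The "knowledge" comes inductively from experience, not from
# hand-coded symbolic rules -- the shift Li describes.

pair_counts = Counter()

def observe(sight, sound):
    pair_counts[(sight, sound)] += 1

def expected_sound(sight):
    candidates = {s: c for (v, s), c in pair_counts.items() if v == sight}
    return max(candidates, key=candidates.get) if candidates else None

for _ in range(20):
    observe("dog", "bark")
observe("dog", "silence")               # experience is noisy
observe("cat", "meow")

print(expected_sound("dog"))            # -> bark
```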

Given the practical limitations of AI, we're left wondering what Hawking the Diviner is really getting at when he warns of an imminent AI coup. Since it's more apt to think of AI as cognitive prosthetics, it behooves us to trace the augmentation back to its source. I wonder, then, if what really worries Hawking is not some hostile Other like Agent Smith from The Matrix, but the more pedestrian Other that is other people.

This becomes readily apparent when we consider a thought experiment popular with the AI crowd: that a superintelligence might destroy us inadvertently in pursuit of its single-minded goal to manufacture paper clips. The image of the paper clip is meant to be whimsically arbitrary. It's a stand-in for any goal that might be radically extraneous to our own. But the choice of the paper clip is telling. This seemingly innocuous image is an exemplar of an artificial intelligence that exists in our world today and that threatens to destroy us just as surely as Ultron.
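
The thought experiment, usually credited to the philosopher Nick Bostrom, is at bottom a claim about objective functions: whatever the objective doesn't mention, the optimizer can't care about. The toy planner below, with invented actions and payoffs, makes the omission explicit.

```python
# A toy greedy planner whose objective counts only paperclips. Side
# effects simply don't appear in the objective, so they can't matter
# to the optimizer -- the whole point of the thought experiment.

ACTIONS = {
    # action: (paperclips gained, habitat destroyed)
    "recycle scrap":   (10, 0),
    "strip-mine park": (500, 1),
    "do nothing":      (0, 0),
}

def objective(outcome):
    paperclips, habitat_destroyed = outcome
    return paperclips        # habitat_destroyed is never consulted

best = max(ACTIONS, key=lambda action: objective(ACTIONS[action]))
print(best)                  # -> "strip-mine park"
```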

We call it bureaucracy. Bureaucracy is a superintelligence that transmutes individuals into a machine. While utterly dependent upon it, we all mistrust it, even begrudge it. It's the DMV. It's Big Data. It's Wall Street. It's the Deep State. What's most intimidating about bureaucracy is that it's a human-machine hybrid, a cyborg. We tend to think of cyborgs in the vein of the Terminator or Robocop, but the most terrifyingly sublime form of the cyborg is the one where a tech-enabled system does the "thinking," while people are the cogs. Like The Matrix. Or McDonald's.

Unfortunately for us, though, despite Hawking's perseverations, the "singularity" of the cyborg bureaucracy is already upon us. It was there when Sumerian scribes began scoring clay tablets to account for the harvest. And it persists today in the joystick jockeys in Arizona trailer parks pulling triggers that unleash deadly airstrikes from Predators buzzing over Pakistan.

When physicists and tech billionaires publicly hand-wring over an AI apocalypse, politely suggest to them that they get out of their heads. If they're feeling social, they might even sit in on Dr. Li's Data Incubator, where they'll get a crash course in how big business uses AI to better serve customers and, just maybe, to make a profit.

Hopefully, in the near and long term, our corporate overlords—and the bureaucracies that empower them—won't destroy our habitat in the monomaniacal pursuit of ever more paper clips.


By Sean Miller


