What tech calls "AI" isn't really AI

Facial recognition, surveillance and helping you buy things are a far cry from android sentience

Published April 29, 2018 10:00AM (EDT)

Robots, produced by Knightscope, are intended to assist in crime prevention and law enforcement. (Getty/Rob Lever)

Our world will never invent AI.

Not because the laws of physics or nature prevent it (although they might), but because our society will fall short of the task.

The phrase "our world" does the heavy lifting in that first sentence. In other words: Artificial Intelligence is not impossible, just extremely unlikely in our historical moment. The jury is still out, but there may be no cosmic rule that stops human beings from building digital minds; AI might be doable. But technocratic, late-capitalist, postmodern 21st-century America will not have that honor, I'm afraid. A different civilization could invent AI; a different human world with different human priorities, a keener eye and a deeper understanding could take the prize. It just won't be us. Our cultural, social and economic systems are dead-set against it.

The problem with AI

The fate of AI is best expressed by adapting the "Sherman Statement," a piece of political jargon named after Civil War General William Tecumseh Sherman, who emphatically did not want to be president. Sherman reportedly said, "If drafted, I will not run; if nominated, I will not accept; if elected, I will not serve."

AI is improbable for the following reasons: the engineers who would supposedly build it are too easily satisfied, the plutocrats who would fund such a venture are indifferent, and the scholars who would judge it are easily fooled. Specifically, there are three major obstacles on the road to building a better mind.

First, the problem itself is poorly defined: what do you mean by intelligence? Nature, with all her blind hideous strength, endless experimentation and wild wastes of infinite time, has only managed the trick once (by our narrow definition), with one species of tree-ape on a rolling green world. Even if you believe there's intelligent biological life elsewhere, the stats aren't promising. Eternity had forever to crack the code. We have minds, but less patience.

When we speak of creating AI, what we really mean is "brains like ours." But why? Intelligence is abundant in nature. Here's how mankind's vanity warps our considerations: We rank ourselves first in cognition, our close relatives (the great apes) next, followed by animals who serve human purposes and human designs. We favor animals that seem more like us. We judge all creatures great and small by how close they are to our brand of thought.

This is nonsense. Why should human smartness bear the sovereign stamp of intellect? Who made us the yardstick for brains? The intelligence of an anthill is the opposite of a human's, but ants are very, very successful at being eusocial insects. They're more widespread than we are, there are more of them, and they'll probably be around longer than us. Ravens have a non-primate problem-solving intelligence. Where's their Nobel?

Don't get me wrong. I'm Trumpian on humanity: We are the splashiest, most tremendous species. But if we're honest about our search for intelligence, we have to admit our own biases.

Second, there's the problem of engineering and building the thing. We don't actually know how our own brains work, and it may be impossible to understand how consciousness and intelligence function, in the same way that the puzzle of free will may prove unsolvable. We barely understand the mind as it is; we're in the dark ages of neuroscience and neuroanatomy, to say nothing of the philosophical riddles behind the problem. Innovating our way to androids would require several hundred technological leaps we lack the capacity for; we can't design a garden slug, much less a toddler. We've barely gotten around to building robots that can balance and walk at the same time. Mind will not slip its meat shackles anytime soon.

Third: the problem of judgment. Apart from the issues of human bias and engineering, how will we be able to tell that an AI is, well, AI? The Turing Test, you might say. Ah, but we are credulous, self-deceiving creatures. What is more likely to happen first: that human beings create an artificial mind, or that humans build clever programs that can fool most of us into thinking they're thinking? The latter is much easier than the former. As David Hume once asked, what's more likely: a miracle, or somebody lying about a miracle? We've built CGI dinosaurs, but we're nowhere close to cracking their genetic code. Where human beings are concerned, the power of illusion usually arrives first.
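
To make the point concrete, here is a toy sketch of my own, a few lines of Python in the spirit of Joseph Weizenbaum's 1966 ELIZA chatbot (this is an illustration, not anyone's production system). It understands nothing; it only matches patterns and reflects your words back at you. Programs no deeper than this have been mistaken for attentive listeners.

    # A hypothetical, illustrative ELIZA-style responder: pattern-matching, no understanding.
    import re

    RULES = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r".*\bmother\b.*", "Tell me more about your family."),
    ]

    def respond(message):
        """Return a canned reflection of the user's own words."""
        for pattern, template in RULES:
            match = re.match(pattern, message.strip(), re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please, go on."

    print(respond("I feel like nobody is listening"))
    # -> "Why do you feel like nobody is listening?"

Weizenbaum was alarmed to find that some of his own test subjects treated exchanges like this as genuine understanding. That is the judgment problem in miniature.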

The material issue

Promoters tout AI as a new species. Given the right conditions, they say, it will inevitably erupt from computer technology. But evolution disagrees. Look at the ecosystem that, in theory, will eventually birth AI: Silicon Valley. What kind of projects does the Valley favor? Thoughtful, long-term projects that might span several human lifetimes? Or quick, fly-by-night, grab-a-buck app-ifications? If an engineer lucked into real, honest-to-God AI tomorrow, they would use it to make Uber Eats more efficient. I'm dead serious.

In reality, so-called "AI" is being designed to make companies more money, not to create the next HAL 9000. That is the selection pressure at play. When the talking heads and tech journalists and silicon bros discuss "AI assistants" like Siri or Alexa, they don't actually mean AI. They're not talking about virtual minds. To quote a great man, "You keep using that word; I do not think it means what you think it means." What they're describing is technology that forces human beings to change their behaviors to suit the machine.

As I've written elsewhere, when a bank switches to an automated teller, it isn't employing AI. It's forcing you to give up a piece of your time and comfort so it can save a few bucks on jobs. This is not AI. Machines that adopt human mannerisms or echo human behaviors are not artificially intelligent. If aping mankind equaled AI, then there are some really nifty American Girl doll collections that Elon Musk should look into.

At present, sci-fi AI does not exist. Large spreadsheets exist. Predictive text exists. Pattern-seeking algorithms exist. But these are not minds. When I see how easily the public and industry titans project mind-ness on these digital card-tricks, I grow skeptical about our capacity for Turing-style discernment.
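
If it helps to see how small the trick is, here is a minimal sketch of my own (illustrative Python, not any vendor's actual system) of what "predictive text" amounts to: count which word tends to follow which, then suggest the most common successor. It is bookkeeping, not thought.

    # A hypothetical, minimal sketch of predictive text: word-pair counting.
    from collections import Counter, defaultdict

    def build_model(text):
        """Map each word to a tally of the words that have followed it."""
        words = text.lower().split()
        followers = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            followers[current][nxt] += 1
        return followers

    def suggest(model, word):
        """Suggest the most frequent successor of `word`, if any has been seen."""
        counts = model.get(word.lower())
        return counts.most_common(1)[0][0] if counts else ""

    model = build_model("the cat sat on the mat and the cat slept")
    print(suggest(model, "the"))  # -> "cat" (follows "the" twice; "mat" only once)

Scale that tally up by a few billion rows and you have something that finishes your sentences convincingly. Nowhere in it is there anything to be convinced.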

The moonshot that wasn't

But these three objections don't address the most practical problem: Even if the above conditions were met, we are not serious about Artificial Intelligence. How do I know this? Because we've handed the project over to car builders and Mark Zuckerberg. Those are the people funding AI: folks who want slightly cleverer bots, slightly more accurate advertising algorithms. And they will declare victory the moment that "AI" becomes good enough to be profitable. That's how we do things here. Breathing life into a digital soul would be the equivalent of the moonshot, and we've given the charge to app designers. That's how earnest we are.

So, the problem of AI, rendered into Shermanesque terms: If AI is scientifically possible, it will not be discovered anytime soon; if it is discovered, it will not be funded; if it is funded, it will not be done to completion; if it is done to completion, we will not know it when we see it.

Of those four conditions, the most relevant are the last two: the market and the test. There is a yawning universe of difference between a Program Good Enough to Fool Humans and a Program That Is Actually Self-Aware. Even if we could tell the two apart, I suspect we wouldn't care; our systems of funding certainly won't, and we won't get close enough to have to make the distinction. If we truly want to create self-aware minds, we should start with ourselves. AI means As If.


By Jason Rhode

Jason Rhode is a writer from West Texas. He has been published by Paste Magazine, McSweeney’s, The Comics Journal, and Monkeybicycle. He appeared on an April 2011 episode of Jeopardy, and one day he will command all the good lads in Eastcheap. Follow him on Twitter at @iamthemaster.
