Artificial intelligence isn’t very intelligent and won’t be any time soon

For all of the recent advances in artificial intelligence, machines still struggle with common sense

Published October 5, 2019 2:59PM (EDT)

(BEN STANSALL/AFP/Getty Images)

This story originally appeared on Massive Science, an editorial partner site that publishes science stories by scientists. Subscribe to their newsletter to get even more science sent straight to you.

Many think we’ll see human-level artificial intelligence in the next 10 years. Industry continues to tout ever-smarter tech, like personalized assistants and self-driving cars. And in computer science, new and powerful tools embolden researchers to assert that we are nearing the goal in the quest for human-level artificial intelligence.

But history and current limitations should temper these expectations. Despite the hype, despite progress, we are far from machines that think like you and me.

Last year Google unveiled Duplex, a Pixel smartphone assistant that can call and make reservations for you. When asked to schedule an appointment, say at a hair salon, Duplex makes the phone call itself. What follows is a terse but realistic conversation, complete with scheduling and service negotiation.

Duplex is just a drop in the ocean of new tech. Self-driving cars, drone delivery systems, and intelligent personal assistants are products of a recent shift in artificial intelligence research that has revolutionized how machines learn from data.

The shift comes from the rise of “deep learning,” a method for training machines with hundreds, thousands, or even millions of artificial neurons. These artificial neurons are crudely inspired by those in our brains. Think of them as knobs. If each knob is turned in just the right way, the machine can do different things. With enough data, we can learn how to adjust each knob so that the machine can recognize objects, use language, or perhaps do anything else a human could do.
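To make the knob metaphor concrete, here is a minimal sketch in Python (assuming only NumPy) of a single artificial neuron whose two knobs are nudged toward a toy rule hidden in the data. It illustrates the principle, not any production system; real deep learning stacks millions of such neurons.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))        # toy data: 200 examples, 2 inputs each
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # the hidden rule the neuron should learn

weights = np.zeros(2)    # the "knobs", initially untuned
bias = 0.0
learning_rate = 0.5

for _ in range(1000):
    prediction = 1 / (1 + np.exp(-(X @ weights + bias)))   # the neuron's current guesses
    error = prediction - y
    weights -= learning_rate * (X.T @ error) / len(X)      # nudge each knob a little
    bias -= learning_rate * error.mean()

print(weights, bias)   # the knobs end up turned so the neuron reproduces the rule
```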

Previously, a clever programmer would “teach” the machine these skills instead of the machine learning them on its own. Famously, this approach was behind both the success and the eventual demise of IBM’s chess-playing machine Deep Blue, which beat the grandmaster and then world champion Garry Kasparov in 1997. Deep Blue’s programmers gathered insights from expert chess players and programmed them into Deep Blue. The strategy worked well enough to beat a grandmaster, but it failed as a general approach to building intelligence beyond chess playing. Chess has clear rules, and it’s simple enough that you can encode the knowledge you want the machine to have. Most problems aren’t like that.

Take vision, for example. For a self-driving car to work, it needs to “see” what’s around it. If the car sees a person in its path, it should stop. A programmer could give the car a hint: look for faces. Whenever it sees a face, the car stops. This sounds sensible, but it’s a recipe for disaster. If someone’s face is covered, the car won’t know to stop. The programmer could patch this by adding another hint, like looking for legs. But imagine someone whose face is covered crossing the street with groceries covering their legs. Many real-world problems suffer from this sort of complexity: for every hint you give the machine, there always seems to be a situation the hints don’t cover.
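To see how brittle hand-written hints are, here is a hypothetical sketch in Python. The Scene fields and the should_stop function are invented for this illustration, not taken from any real self-driving system.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    face_visible: bool
    legs_visible: bool

def should_stop(scene: Scene) -> bool:
    if scene.face_visible:    # hint 1: a face means a person
        return True
    if scene.legs_visible:    # hint 2: legs mean a person
        return True
    return False              # no hint fires, so the car keeps going

# A person whose face is covered, carrying groceries that hide their legs:
print(should_stop(Scene(face_visible=False, legs_visible=False)))  # False, and that is the disaster
```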

Vision researchers kept constructing hints like these until a breakthrough in 2012, when Geoffrey Hinton and colleagues at the University of Toronto used deep learning to forgo hand-crafted hints altogether. They “showed” a machine 1.2 million images, and from those images it constructed its own hints about which features of an image indicate which type of object. Based on these learned hints, the machine was able to categorize complex images, including types of bugs and breeds of dogs, with unprecedented accuracy.
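The learned descendants of those hints now ship in off-the-shelf models. As a rough sketch (assuming PyTorch and torchvision are installed, and a local image file hypothetically named dog.jpg), a network pretrained on ImageNet-style data can label an image in a few lines:

```python
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT              # hints learned from millions of labeled images
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()               # the resizing/normalization the model expects

image = preprocess(Image.open("dog.jpg")).unsqueeze(0)   # add a batch dimension
with torch.no_grad():
    probabilities = model(image).softmax(dim=1)

top_prob, top_class = probabilities.max(dim=1)
print(weights.meta["categories"][top_class.item()], float(top_prob))
```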

The deep learning breakthrough transformed artificial intelligence. Key deep learning researchers won this year’s Turing Award, akin to the Nobel Prize of computing. Deep learning has also become part of our daily lives. Google’s search engine, Facebook’s social network, and Netflix’s movie recommendations all use deep learning. 

However, artificial intelligence research has suffered from gross underestimates of difficulty from the beginning. A famous gaffe comes from MIT’s 1966 Summer Vision Project, in which an undergraduate was rumored to have been tasked with getting a computer to see the way humans do over the course of a single summer.

This is not an isolated incident. The broader history of forecasts in artificial intelligence reveals surprising truths. Expert and public forecasts for human-level artificial intelligence don’t differ significantly, and people seem strongly inclined to predict it 15 to 25 years out, no matter what year the prediction is made. And forecasts throughout history, according to those who study them, “seem little better than random guesses.”

Yet the limitations of deep learning are the true cause for concern. Even with the aid of deep learning, machines struggle with concepts that are common sense for humans. One example is the difficulty machines have learning to play video games. A growing community of researchers is using deep learning to build artificial intelligence that can play Atari games. What’s interesting is that some Atari games, like Montezuma’s Revenge, are trivial for children to learn but incredibly difficult for machines, even with deep learning.

It boils down to keys and doors, and the idea that keys open doors. In other words, common sense. A game like Montezuma’s Revenge reasonably expects the player to know, before they start, that keys open doors, so that when a door won’t open, the obvious next move is to find a key. But that line of reasoning is nowhere to be found in an artificial intelligence that lacks a human’s knowledge of doors. If these limitations were exclusive to video games, maybe they wouldn’t matter. Yet they extend everywhere deep learning is used. Take the above-mentioned self-driving cars, which rely heavily on deep learning. Place specially designed stickers in clever positions on the road and you can cause a self-driving car to veer into oncoming traffic. Humans generally have the sense not to veer into oncoming traffic.
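One way to see how little a learner gets from raw trial and error without that prior knowledge is to run a random agent in the game. A minimal sketch, assuming the gymnasium and ale-py packages (and the Atari ROMs) are installed and using gymnasium’s ALE environment id:

```python
import gymnasium as gym

env = gym.make("ALE/MontezumaRevenge-v5")
observation, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(10_000):
    action = env.action_space.sample()        # flail at random: no idea what keys or doors are
    observation, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        observation, info = env.reset()

print(total_reward)   # typically 0.0, so there is almost no signal for trial-and-error learning
env.close()
```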

More work pops up each year addressing common sense and deep learning, though with limited success. As a result, there are some deep learning aficionados with much more sober, humble forecasts of human-level artificial intelligence. Take for instance what Yann LeCun, a recipient of this year’s Turing Award, said: “Machines are still very, very stupid. The smartest AI systems today have less common sense than a house cat.”


By Joey Velez-Ginorio
