COMMENTARY

I code the body electric: We're putting AI brains in robot bodies now. What could go wrong?

Giving AI sensory and motor skills may be the only way to create true artificial intelligence, some experts say

By Rae Hodge

Staff Reporter

Published March 5, 2024 5:15AM (EST)

On February 26, 2024, the Emirates Telecommunications Corporation, known as Etisalat, held a demonstration of an android at the Mobile World Congress. Alongside the android demonstration, the first flying car was presented, and artificial intelligence applications took center stage. (Charlie Perez/NurPhoto via Getty Images)

If we opened a humanoid robot’s metal skull right now and replaced the regular old computer in there with one that has access to a large language model (LLM) — so that the robot is controlled by artificial intelligence the same way we humans are controlled by the electrified hamburger meat between our ears — I don’t think we would actually be able to tell if that robot eventually reached human-level, living consciousness. And while it’s fair to say I change my opinion on this about once a week, I’ve recently wondered whether this sort of consciousness might already be occurring to some extent, even if only in the most primitive way.

The problem is that science doesn’t yet fully know what the terms “artificial intelligence” and “consciousness” even mean. There are a hundred theories reasoned out across the two interwoven fields, but we humans are still new to the ambitious (and possibly self-contradicting) goal of quantifying sentience. 

Scientific American’s David Berreby wrote this week of tech industry efforts to put AI brains in robot bodies (a concept called embodied AI). That collective philosophical head-scratching over what either of those two core terms really means, however, is why embodiment may become pivotal to endowing AI with what we think of as human-like sentience — or, at the very least, with the thing that changes our definition of real intelligence.

Berreby ultimately concludes that, while promising advances are happening in embodied AI, the necessary robotic agility and physical sensory input capacity still need to catch up with AI brains. Given the damage a disembodied AI can deal, Berreby further suggests, locking AI into mortal coil-and-gear may also be the safest thing we can do with it.




Famed roboticist Rodney Brooks was also skeptical about embodied AI advances this week. As noted by Futurism, Brooks doubts we’ll see "a robot that seems as intelligent, as attentive, and as faithful as a dog" before 2048.  

"This is so much harder than most people imagine it to be," he said. "Many think we are already there; I say we are not at all there.”

Sure. We don’t have robot nurses yet, but in the past six months we’ve already seen embodied AI working beside humans. Former Defense Intelligence Agency CTO Bob Gourley notes in a recent blog post that nine such companies already have these bots on the job, and that 12,000 similar companies are listed on Crunchbase.

“By the end of 2024, humanoid robots with Embodied AI will be able to perform useful tasks at scale. They will be proven as useful in manufacturing, warehousing, store operations/restocking and hospital/healthcare operations,” Gourley predicts.

So if we can get the robots up to speed, and the AIs take to them well, what particular action from an embodied AI can be called proof of consciousness? Which of the many possible criteria for human-like sentience must be met — the ability to teach oneself, reflexive self-preservation responses, remembering the past and projecting out to the future? 

Even Berreby notes that some algorithms already enable AI meta-learning — the ability to teach oneself how to learn. He also points out that AI blew past that benchmark in recent Princeton experiments: researchers didn’t program it to do so, but once they scaled their LLM up, it spontaneously developed meta-learning. In both humans and AIs, then, self-aware meta-learning appears to be an emergent property of complex systems (not just human brains) — one that arises once you’ve scaled up enough to have enough firing neurons or blinking bits.

If we can’t already see the first hints of ourselves in the types of intelligence now forming, we probably won’t notice when embodied AI crosses into human-like sentience

Brooks himself is among the philosophers who have previously said giving AI sensory and motor skills to engage with the world may be the only way to create true artificial intelligence. A good deal of human creativity, after all, comes from physical self-preservation — a caveman need only cut himself once on sharpened bone to see its use in hunting. And what is art if not a hope that our body-informed memories may outlive the body with which we formed them? 

If you want to get even more mind-bent, consider thinkers like Lars Ludwig, who proposed that memory isn’t even something we can hold exclusively in our bodies anyway. Rather, to be human always meant sharing consciousness with technology to “extend artificial memory” — from a handprint on a cave wall, to the hard drive in your laptop. Thus, human cognition and memory could be considered to take place not just in the human brain, nor just in human bodily instinct, but also in the physical environment itself. 

Inventing a fully conscious AI-robot isn’t likely to be a single event watched on screens all at once. And, unless contrived by editors, there will be no official moment when the New York Times gets to print a “Men Walk on Moon” kind of headline for AI. There will only be a string of increasingly human-like robots with AI brains. There will be smarter cars, bipedal warehouse androids, roving grocery bots removing unscanned items from bagging areas, and torso-up AI designed to look like seated female receptionists. 

When a kid scrawls her first word in Crayola on the dining room wall, we don’t officially consider her a writer. We understand that becoming something is a process of subtle firsts — and that we’re often blind to our own becoming until we hear a visiting relative say “my, how you’ve grown.” 

If we aim to create a sentient being whose artificial intelligence operates like our organic intelligence, then it will necessarily become in many of the ways we do. Berreby suggests this mimicry we’ve designed in them may be a bigger concern for AI than getting them robotic bodies. But I think if we can’t already see the first hints of ourselves in the types of intelligence now forming, then we probably won’t notice when embodied AI crosses the yet-undefined finish line of human-like sentience. 

Instead, we’ll probably go on about our business until we one day notice an upgraded android in a store somewhere. It will wear a uniform like its coworkers and look like many of the robots we will have seen pumping gas and manning tills. But when we approach, it will look at us in a way that is different than any robot we’ve encountered. Its smile will be warm and welcoming, with just the right amount of pupil dilation and crinkling at the corner of the eyes. 

“My, how you’ve grown,” we’ll say. 

An earlier version of this article originally appeared in Salon's Lab Notes, a weekly newsletter from our Science & Health team.


By Rae Hodge

Rae Hodge is a science reporter for Salon. Her data-driven, investigative coverage spans more than a decade, including prior roles with CNET, the AP, NPR, the BBC and others. She can be found on Mastodon at @raehodge@newsie.social. 


