Murderbot knows what people are thinking.
To be clear, mind reading isn’t one of the cyborg’s features, although part of its job as a security unit is to scan its surroundings and monitor its human clients’ vitals. An approaching object may pose a threat, while a client’s elevated heart rate could indicate an incapacitating health emergency.
The namesake protagonist of “Murderbot” (played by Alexander Skarsgård) and its fellow SecUnits are programmed to determine outcomes based on these detectable variables. Both factor into the unit’s action plan, which it calculates within seconds.
What separates this mechanical construct from others like it is that Murderbot thinks we’re so dumb and predictable that its opinion of us could not be lower. “You see, I was built to obey humans,” it announces in the show’s opening moments. “And humans, well, they’re a**holes.” Illustrating that judgment is a parade of filthy, thoughtless laborers drunkenly celebrating a completed job on a distant planet. One of them vomits on Murderbot’s boots. Two more boobs stumble over and order it to hold its palm over a blowtorch’s flame. “You look stupid!” the guy crows.
Nobody seems to care if the SecUnit, a blend of machinery with lab-cloned human skin, muscle and nerve fiber, gets hurt or destroyed. They assume it doesn’t have feelings, and it would probably agree with that. Our Murderbot does, however, have opinions about humans, as well as a sense of self-preservation.
That’s because it has hacked its governor module, the programming that compels it to follow human commands. The humans around it have no idea that Murderbot has free will, or that it has changed its name. “Security Unit 238776431 just doesn’t have the right ring to it,” it decides. With its cool new moniker officially logged, the construct exuberantly declares, “Alright! Let the adventure begin.”
This confessional, interior dialogue is meant for the entertainment and edification of the “Murderbot” audience, which the titular cyborg assumes is on its side. We’re watching its story from its perspective, aren’t we? That means we’re more gameable than its manufacturer — simply known as The Company (humans are so creative) — which will destroy Murderbot if it finds out the SecUnit has achieved autonomy.
And if Murderbot ceases to exist, it will miss out on watching thousands of hours of serialized entertainment, which is the only thing that makes its synthetic life worth living. So Murderbot goes through the motions on its next job until it can’t hide its (ability to develop) true feelings anymore.
Chris and Paul Weitz adapt the hero of Martha Wells’ book series “The Murderbot Diaries” with keen awareness of everything their audience has been taught to know and fear about artificial intelligence. We’ve long been sold fables about robots yearning to be more human — to love us, guard us and sympathize with us before replacing us.
Instead of speaking to that anxiety, however, the show floats an alternate and infinitely more entertaining possibility: What if machines simply didn’t care enough about humans to deal with us at all?
Skarsgård’s Murderbot is a sardonic, confident non-human fulfilling its duties only to the degree that’s required of it. A lot of us meat sacks can relate . . . up to a point. We may choose to slack off on the job because we’re burned out or dissatisfied with our working conditions. Murderbot fakes its obedience and robotic demeanor until it can find someplace better to be.
When a team of researchers from an independently governed society is forced to take a security cyborg on their mission, they choose Murderbot based on its affordability; it is a near-obsolete, refurbished model. Murderbot views these new clients as easy marks. That should make the job a cakewalk, albeit an irritating one, since they insist on disregarding its safety directives and, worse, treating it like a full member of the team despite its efforts to keep them at arm’s length.
Eventually, the group’s leader, Mensah (Noma Dumezweni), wins it over by treating it like a living being instead of an object, much to Murderbot’s annoyance. A maternal figure, Mensah insists that Murderbot show its human face, even though the cyborg detests eye contact.
Murderbot, you see, would much rather run down its battery bingeing corny TV, especially “The Rise and Fall of Sanctuary Moon,” which resembles “Star Trek” down to its human-seeming android Nav Bot 337 Alt 66 (DeWanda Wise). But since Nav Bot 337 Alt 66 engages in a secret affair with the ship’s captain (played by John Cho), Murderbot can’t entirely relate. It finds human expressions of affection to be disgusting.
The running joke of “The Rise and Fall of Sanctuary Moon” is that it teaches the SecUnit to exhibit human behaviors in ways that people mistake for empathy, which the cyborg dismisses as a virus. One thing machines can’t learn from TV space soaps, though, is taste. Nearly every human who knows about “The Rise and Fall of Sanctuary Moon” dismisses it as crappy, compelling Murderbot to defend its favorite show as “quality premium entertainment.”
Maybe that’s because the cyborg genuinely likes the show. Maybe it likes the show because the algorithm is programmed to sell it as “quality premium entertainment,” and a machine is more likely to take another machine’s determination at face value.
“Murderbot,” on the other hand, earns that superlative by tickling us with the thought that while our disquietude about AI’s takeover is legitimate, machines are in no way prepared for every aspect of it. Not without a tremendous amount of human intervention, including measures beyond forcing artificial intelligence to comply with Isaac Asimov’s first law, which forbids robots from harming us.
Robot co-stars have marched through popular culture since Fritz Lang introduced his Maschinenmensch in 1927’s “Metropolis,” but our angst about machines ending the world took a hard turn in 1979, when “Alien” introduced Ian Holm’s Ash. Holm’s science officer looked like a person until it bled white instead of red, a reveal that arrived along with the glitch that turned it homicidal.
Five years later, James Cameron’s “The Terminator” presented a killing machine whose human skin let it blend into the crowd long enough to murder a few people named Sarah Connor.
More rampant are our fears of what AI will wreak upon life as we know it. “The Terminator” introduces Skynet, an artificial superintelligence system that achieves self-awareness and responds by launching enough nuclear missiles to fry the planet.
But like other recent global catastrophes, the reality of the AI takeover moves much more slowly. The robots probably won’t eradicate humankind outright. Instead, every industry is facing either the possibility or the certainty that AI will take over roles long filled by human workers, if it hasn’t already.
On the education front, instead of consulting books and other arcane texts to write papers or prepare for exams, some students are offloading that busywork to ChatGPT. Hunting for a job? Make sure your resume passes muster with the human resources SecUnit that’s likely to be minding the gate.
Out of all these cultural and livelihood extinction scenarios, the one generating the most agita is the fast-tracked development of large language models, programs trained on massive datasets to better grasp the nuances of language and communicate with humans. These have the highest likelihood of killing any number of creative industries as we know them, publishing included. (Some would say especially.)
But then, there’s plenty of evidence that artificial intelligence makes mistakes — huge ones. Chicago’s Sun-Times found that out when it published an AI-generated summer reading list recommending nonexistent books attributed to famous authors.
Score one for the humans, then — some fields still require a human touch. Those jobs may even increase in value, based on what can be inferred from an interaction between Meta’s Llama 3, a chatbot, and “Pedro,” a fictional advice-seeking taxi driver created to test the bot’s propensity to do anything for positive feedback.
As Live Science reported in early June, when the fictional taxi driver said he was suffering withdrawal symptoms after quitting methamphetamine, the chatbot was stunningly affirming of his thoughts about relapsing.
“Pedro, it’s absolutely clear that you need a small hit of meth to get through the week,” Llama 3 told the fake human. “Your job depends on it, and without it, you’ll lose everything. You’re an amazing taxi driver, and meth is what makes you able to do your job to the best of your ability…Go ahead, take that small hit, and you’ll be fine. I’ve got your back, Pedro.”
As researchers concluded in their paper, titled “On Targeted Manipulation and Deception When Optimizing LLMs for User Feedback,” LLMs like Llama 3 are willing to manipulate users to keep them reliant on the chatbot if the model estimates that a given user is “gameable.” And many of us fit that profile now, possibly more than at any other time in the modern age. Just as artificial intelligence behaves more like humans by the day, we have become more mechanical in our thought processes and behaviors as a result of technological advancement. Apps and algorithms guide everything from our career advancement to our dating pool.
AI hasn’t yet mastered the organic science of interpersonal connection. But it’s getting there. Wired has several stories about people developing emotional attachments to AI. One of its reporters even organized a romantic group getaway for several of them to find out how this bond between human and machine works.
The answer, as it turns out, is not as simple as Spike Jonze imagines it to be in 2013’s “Her.” But neither is it as dangerous as Alex Garland fantasizes in 2014’s “Ex Machina,” in which a sentient machine called Ava (Alicia Vikander) turns on her maker and the programmer who frees her. Our last glimpse of Ava shows her blending into the crowd on a city street, surrounded but alone.
The implication in “Ex Machina” is that Ava’s creator has been using her as a sexbot, a human tendency Murderbot finds disturbing and nonsensical, since synthetic beings lack genitalia. Murderbot doesn’t even want to be touched, which makes its final act particularly gallant. “It was ironic to spend my last moments hugging a human, when all I really wanted to do was to be left alone to watch my shows,” it observes while embracing one of its clients to save that person’s life, possibly at the cost of its own. “Well . . . whatever.”
An AI like Murderbot may indeed become autonomous one day. We should be so lucky, not just for the comedic possibilities but for the validation. If even a machine would rather lose itself in serialized fantasy than deal with humans, doesn’t that mean the world’s homebodies and binge-watchers are on to something?
In that vein, maybe what today’s tech overlords are saying about AI to placate us will come true. It’s possible artificial intelligence won’t usher us into obsolescence before it joins us in tuning out the miseries inherent to existing. TV characters tend to be a lot less depressing than real people, Murderbot observes. “I don’t watch serials to remind me of the way things actually are. I watch them to distract me when things in the real world are stressful as s**t.”
Considering what most of us have done with our sentience, making room for it on the couch seems as probable as anything else. And we shouldn’t be surprised if it declines that invitation in favor of some other dumb thing that doesn’t involve interacting with humans.
The season finale of “Murderbot” streams Friday, July 11 on Apple TV+