Nearly 40 years ago, “RoboCop” imagined a world in which a police officer could be recast as a highly efficient, if inhuman, crime-fighting cyborg. Today, in an age when artificial intelligence is promoted as synonymous with modernity, police departments across the U.S. are embracing all manner of machines that are not only, by definition, inhuman — but also, according to critics, not very efficient at actually maintaining public safety.
Over the last few years, police have arrested scores of people using AI facial-recognition tools, only to find out later that many of them were miles away from where an alleged crime took place, or physically incapable of committing it. Most of those false leads targeted people of color. Those incidents potentially reveal the material function of AI tools that are increasingly being used by law enforcement, as well as the pitfalls of automating not only the management and discipline of specific communities, but also the legitimation of the force used against them.
Humans often defer to AI because its calculating, unfeeling nature implies objective authority, ignoring the reality that AI learns from the past to predict the future. In American policing, that past is defined by the systemic over-policing and containment of working-class Black and brown communities. If police forces are already marked by bias, then the AI tools they use would serve to automate existing hierarchies rather than question or correct them.
“Large language model vendors, the big tech companies that are operating these platforms, are never going to talk about what can go wrong. They never talk about hallucinations. They never give any health warnings to the public. It’s not in their self-interest,” Graham Lovelace, a journalist who writes about AI technology, told Salon. “But this technology can be highly unreliable, and it can cause harm.”
For example, Lovelace explained, “If the system throws up an image of a face and says that’s the guy they want, the tendency would be for law enforcement officers to take it for granted because of the way they’ve been conditioned by culture and by those AI companies to overvalue its effectiveness. They’re not likely to say, ‘actually, it’s not that person, that person was nowhere near the scene of the crime.'”
While the tendency to trust AI is seemingly universal, the stakes are rarely as high as they are in policing. Because police hold a monopoly on violence in their communities, technological misuse can become a matter of life and death. And AI technology ensures that even those who are later found innocent remain caught in the surveillance cycle.
“Once you’re targeted, you tend to receive a label on you for whatever aroused police suspicion, then all your information is taken, maybe all your biomedical records are then recorded, you’re in the databases now, and you’re probably going to be seen as a suspect in the future,” Lovelace said.
Critics warn that AI tools, by providing a shortcut for high-stakes decision-making, amplify unethical or oppressive practices that may already be rampant. When a predictive policing tool like Geolitica (formerly PredPol) marks a neighborhood as a crime “hotspot” based on police activity rather than actual criminal activity, it justifies deploying yet more officers in already militarized areas and ensures that the poor remain policed and contained rather than supported. Transcription tools like Axon’s “Draft One,” which draft reports from body camera audio, might introduce cognitive laziness into the legal record: if an officer doesn’t rigorously fact-check an AI-generated document, hallucinated details or misattributed quotes could become permanent, sworn testimony.
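The self-reinforcing dynamic behind the “hotspot” problem can be sketched in a few lines of code. The toy simulation below is purely illustrative, with made-up districts and numbers; it is not Geolitica’s actual algorithm, only a minimal model of a system in which records of past police activity, rather than underlying crime, decide where patrols go next.

```python
# Toy model of a predictive-policing feedback loop. All districts, numbers
# and rules here are hypothetical; this is not any vendor's real algorithm.

# Two districts with the SAME underlying rate of actual crime.
true_crime_rate = {"District A": 10, "District B": 10}

# District A starts with more recorded incidents only because it has
# historically been patrolled more heavily.
recorded_incidents = {"District A": 30, "District B": 10}

# Extra incidents (stops, citations, low-level arrests) that get logged in
# whichever district receives the added patrols.
EXTRA_INCIDENTS_IF_PATROLLED = 8

for year in range(1, 6):
    # "Predictive" step: flag the district with the most recorded incidents.
    hotspot = max(recorded_incidents, key=recorded_incidents.get)
    # Feedback step: heavier patrolling in the hotspot produces more logged
    # incidents there, even though underlying crime never differs.
    for district in true_crime_rate:
        bonus = EXTRA_INCIDENTS_IF_PATROLLED if district == hotspot else 0
        recorded_incidents[district] = true_crime_rate[district] + bonus
    print(f"Year {year}: hotspot = {hotspot}, recorded = {recorded_incidents}")
```

Run it and District A is flagged every year, not because more crime happens there, but because the record it generates keeps feeding the next prediction.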
Some AI tools are also reported to be completely ineffective in spite of their price tag. A joint study by The Markup and Wired found that out of 23,631 predictions generated by Geolitica in 2018 for the police department in Plainfield, New Jersey, fewer than 100 lined up with a crime in the predicted category. A 2023 audit by the New York City Comptroller’s office found that only 8 to 20 percent of alerts from ShotSpotter, an acoustic gunshot detection system the New York City Police Department has used since 2015, actually matched real shootings during the sampled periods. A spokesperson for SoundThinking, the company behind the technology, disputed those findings in comments to Salon, citing an independently verified “97 percent accuracy rate,” a figure based on how often police report errors back to the company, and argued that the absence of physical evidence doesn’t prove gunfire didn’t occur, though it doesn’t prove that it did, either.
ShotSpotter technology, the spokesperson added, ensures that police are present “where gun violence is most concentrated” and can provide a “timely response to life-threatening emergencies,” saving an estimated 85 lives annually based on an analysis by the University of Chicago Crime Lab.
SoundThinking’s arguments, and the results of the independent studies it cites, diverge from other findings, such as those from the Chicago Office of Inspector General, which reported that more than 90 percent of alerts resulted in no evidence of a gun-related crime, or a 2021 study in the Journal of Urban Health, which found no significant reduction in gunshot fatalities in areas where ShotSpotter has been used.
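Part of the gap between the vendor’s headline figure and the auditors’ match rates comes down to what each number measures. The back-of-the-envelope calculation below uses entirely hypothetical figures, not SoundThinking’s or the Comptroller’s data, to show how the same alert stream can yield a “97 percent accuracy rate” when accuracy is defined by how rarely police report errors, while only a small fraction of those alerts is ever tied to a confirmed shooting.

```python
# Hypothetical illustration of two different "accuracy" definitions applied
# to the same alert stream. None of these numbers are real data.

alerts = 1000                # gunshot alerts sent to police in some period
confirmed_shootings = 150    # alerts later matched to evidence of gunfire
reported_as_errors = 30      # alerts officers flagged back to the vendor as
                             # misclassifications (backfires, fireworks, etc.)

# Vendor-style metric: an alert counts as accurate unless police report it
# as an error. Alerts that are never confirmed either way still count.
vendor_accuracy = 1 - reported_as_errors / alerts

# Auditor-style metric: the share of alerts corroborated by evidence that a
# shooting actually occurred.
confirmed_rate = confirmed_shootings / alerts

print(f"Vendor-style accuracy:   {vendor_accuracy:.0%}")   # -> 97%
print(f"Confirmed-shooting rate: {confirmed_rate:.0%}")    # -> 15%
```

Both figures can be arithmetically true at once; they simply answer different questions.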
The prospect of officers wasting thousands of hours chasing car backfires and construction noise mislabeled as gunfire has not deterred the New York Police Department from continuing to use the technology. From 2015 to 2025, the department spent $54 million to maintain it; at the beginning of last year, it signed a three-year extension to the tune of $22 million. At the time, then-Mayor Eric Adams insisted that the tool was essential for public safety and that the Comptroller’s team didn’t “understand how the ShotSpotter operates.”
New York City Council Member Tiffany Cabán, a longtime abolitionist who lost a close race for Queens District Attorney in 2019, is personally familiar with the use of “public safety” as both a pro-police slogan and a justification for deploying controversial AI tools.
“In a political culture where the idea of public safety is a popular tool in elections and governance, you have a lot of people then using technologies to make their donors happy and give a semblance of safety, without being that invested in the actual production of safety,” Cabán told Salon. “And that goes back to those different poor outcomes that we were talking about, like ShotSpotter. All over the United States, ShotSpotter surveillance is primarily concentrated in low-income communities of color. When police are sent over and over again into communities for no reason and on high alert, expecting a potentially dangerous situation, and then you connect that to the epidemic we have of tragic police shootings in our country, it’s just a recipe for disaster.”
Technology companies, brushing off those criticisms, have taken great pains to promote AI as a harbinger of modernity and efficiency, regardless of its actual efficacy; many police departments are just as eager to adopt AI so they can claim those labels for themselves. Catering to exactly this sentiment, one surveillance company, Flock Safety, described its tool as “AI providing steroids or creating superhuman capabilities” for crime analysts.
According to critics, this symbiosis creates a scientific veneer for state violence, allowing law enforcement to brand old tactics of containment and harassment as “precise” and “data-driven.” When called to testify before an oversight board, a police official can deflect accusations of targeting specific racial groups or poorer neighborhoods by claiming to merely follow the objective dictates of a machine. Even when those tools are exposed as failures, they can serve as an aesthetic of innovation and omnipresence, and as a way for police to stay busy, or at least look busy, even if they’re chasing the wrong people.
AI companies are making millions of dollars from the ballooning demand. Entire startups like Flock Safety, now a $7.5 billion business, are centered on providing mass surveillance tools for police departments and private companies alike — tools that have been criticized as primarily defending the rich against the poor.
Despite their eager embrace of AI technology as a whole, police departments remain opaque about many specific acquisitions, often shielding their contracts with private vendors from public scrutiny. In previous years, the New York City Council passed legislation requiring the NYPD to publish Impact and Use Policies for its surveillance arsenal; according to Cabán, the NYPD has dragged its heels and released information too vague to be of any use to her colleagues or to members of the Civilian Complaint Review Board. The result has been an arms race between a city council passing more legislation to strengthen oversight and an NYPD that tries everything in its power to dodge it.
“When you look at any particular piece of technology, without strong evidence-based answers to the questions that I laid out, you end up with local governments using those technologies to deepen injustices while lighting a lot of money on fire, like we have done in the past, quite frankly,” said Cabán. The critical questions she cited, which have still gone largely unanswered, include the false positive rates of different tools; whether independent audits for race, gender and socioeconomic bias exist; the impact of the technology on procedural, due process and equal protection rights; what data sources the technology relies on; the mechanisms used for public transparency; and the impacts on privacy.
It’s probable that there are some answers police departments aren’t sure of themselves. Most AI systems are “black boxes” whose internal workings, from training data to algorithms, are both too complicated and too well concealed for anyone outside the company’s leadership to understand. This barrier, critics say, creates a fundamental conflict between corporate intellectual property rights and a citizen’s rights to privacy and due process. The handling of sensitive personal information, and even the determination of guilt or innocence, is no longer solely the purview of judge and jury, but is partially outsourced to private entities whose obligation is to shareholders, not the public trust.
“What many institutions, not just government but also universities and schools, don’t realize is that the companies can essentially use them to train their data for other sales that they want to make down the line,” Andrew Guthrie Ferguson, a law professor at George Washington University, told Salon. “Whether that happens or not depends on whether the government lawyers writing those contracts between the police department and the company are aware of the risk. Or, if they are aware of the risk, how much they’re willing to accept a cheaper contract with fewer data barriers.”
Some AI technology companies and police departments described to Salon the measures they have taken to install guardrails that protect personal data and mitigate potential bias in AI output, partially in response to political backlash and embarrassing mishaps.
Flock Safety told Salon in a statement that “each customer that uses Flock fully owns and controls 100% of its data … Flock NEVER sells data.” However, critics contend that “never selling data” is a low bar when the company is effectively selling the public to the police.
The police department in St. Paul, Minnesota, meanwhile, requires subject matter experts to verify that any work generated by AI is “accurate, complete, appropriate, not biased, not a violation of any other individual or entity’s intellectual property or privacy, and consistent with St. Paul’s policies and applicable laws.”
This policy, of course, relies on the assumption that a human operator can effectively audit a machine whose internal logic is often invisible. And always looming is the oft-cited “automation bias,” a psychological phenomenon in which humans under pressure to be efficient default to trusting computer output as objective over their own instincts. Those instincts, in any case, are probably shaped by the same data and institutional purposes that shape the AI’s output.
To critics like Cabán, the problem is larger than something that can be fixed with more eyes on the machine. The very idea that complex social problems can be solved by advanced technology, by punitive enforcement, or by punitive enforcement equipped with advanced technology, they argue, is a false and expensive promise that cannibalizes resources for health care, affordable housing, education and other approaches that might produce better results for public safety. Measured against those alternatives, an AI surveillance regime looks not modern but hopelessly obsolete.