At Kenwood High School in Baltimore County, Maryland, a 16-year-old student was handcuffed at gunpoint after an AI-powered security system misidentified a crumpled bag of Doritos as a firearm. The student, Taki Allen, had pocketed the empty chip bag while waiting after football practice. Within minutes, multiple police vehicles arrived, and officers approached with guns drawn and handcuffed him before realizing there was no weapon.
The system, developed by Omnilert, is designed to detect potential firearms in security camera footage and alert school officials and law enforcement. But this incident illustrates a simple truth: AI is still not the infallible, hyper-intelligent problem-solver Hollywood loves to portray. The image of Allen with the crumpled bag triggered a high-stakes response because the AI interpreted his posture and the object as a threat. It was a false positive with very real consequences.
Baltimore County Public Schools offered the student counseling and said it would review how the system is used. Critics call the incident a cautionary tale: the more we outsource judgment to algorithms, the more vulnerable we are to the mistakes those algorithms make. AI can scan, flag, and analyze at speeds humans cannot, but it still lacks the context, nuance, and intuition needed to tell chips from guns, mischief from menace.
In an era increasingly dependent on AI for safety, convenience, and decision-making, incidents like this reveal a tension: society expects movie-level intelligence, but reality is far messier. A crumpled snack bag can turn into a crisis, reminding us that technology is a tool, not a substitute for human judgment. The lesson for schools, and for society at large, is that while AI may watch, it does not yet understand.