In the wake of these killings, we have been told, over and over again, that police officers must have the flexibility to make “split-second decisions” to protect the public and themselves. This language has become so commonplace that we would be forgiven for assuming that we have always used it to explain why law enforcement has killed scores of unarmed minorities over the years. After all, the underlying claim rings true: In the line of duty, police officers unquestionably face abrupt dangers that require rapid response.
Law enforcement agencies rely heavily and indiscriminately on the idea that officers must act with “split-second” timing to explain police conduct in an enormous range of circumstances. But as it turns out, this rationale may have little to do with the facts on the ground and everything to do with covering their legal bases.
The “split-second” rationale gained traction among the lower courts in the 1960s and vaulted into the public discourse with the landmark 1989 Supreme Court decision, Graham v. Connor, which remains the bedrock of state and federal use-of-force laws. In that case, with these important words, the court justified the onerous “objective reasonableness” standard that makes it almost impossible to hold officers liable for their conduct:
“The calculus of reasonableness must embody allowance for the fact that police officers are often forced to make split-second judgments—in circumstances that are tense, uncertain, and rapidly evolving—about the amount of force that is necessary in a particular situation.”
Graham did not mark the first time members of the Supreme Court expressly cited the “split-second” nature of police decision-making to justify questionable police conduct. The phrase actually made a racially charged appearance over 20 years earlier in the 1963 case of Wong Sun v. United States.
There, the Warren Court majority ruled in part for the suppression of evidence illegally obtained by the police, because no probable cause existed to support officers’ decision to follow a Chinese man and arrest him. But in a blistering dissent joined by three other justices, Justice Tom Clark criticized the court for creating a “Chinese puzzle” out of a simple case and thereby overburdening officers whose jobs necessitated “split-second” decision-making. Dismissing the majority’s tacit disapproval of the officers’ racially motivated decision to tail an otherwise inconspicuous person of Chinese descent, Justice Clark retorted, “To me it was efficient police work by officers familiar with San Francisco and the habits and practices of its Chinese-American inhabitants.”
Tone-deaf by modern standards, Justice Clark’s statements also lay bare the uncomfortable links between supposedly instantaneous police decision-making and racial profiling. These are links now supported by mountains of data. One widely cited study by researchers at the University of North Carolina used a video game simulation to demonstrate that people tend to pull the trigger faster on black targets than on white targets, even when the target is unarmed. The statistics bear out this bias in the law enforcement context: An analysis of federal data on 1,217 fatal police shootings from 2010 to 2012 found that young black males, age 15 to 19, were 21 times more likely to be shot dead than their white peers.
Yet the fact that law enforcement’s “split-second” decisions take a radically disproportionate toll on black Americans has had little effect on how we write the laws that afford police such great discretion in meting out force. Instead, even outside the courts, the “split-second” rationale has been deployed to defeat legislative attempts to constrain police judgment. Consider the backlash New York authorities faced in 1965 after the enactment of a state law that newly limited officers to using deadly force only where the suspects were believed to have committed serious felonies. The Patrolmen’s Benevolent Association, New York City’s largest police labor union, passed a unanimous resolution criticizing the law for, among other things, “impos[ing] unrealistic requirements of split-second legalistic judgment and thereby embolden[ing] criminals.” In response to the pressure, the law was amended three years later to allow police to use force in a broad range of circumstances, such as to prevent trespass or damage to property.
Viewed in light of this history, it is unsurprising that New York Attorney General Eric Schneiderman came under immediate fire from police unions and district attorneys on Monday after announcing that he would recommend new laws and seek from Gov. Andrew Cuomo the power to investigate killings by police. What we are seeing is a rerun.
It’s worth noting that before the 1960s, courts employed the term “split-second” almost exclusively to describe the experience of vehicle drivers in the moment before collision. A comprehensive search of hundreds of federal and state court decisions reveals that this began to change around the time the Supreme Court handed down Wong Sun and other major decisions restricting evidence admissibility. It may be that the Warren Court’s expansion of prohibitions on the use of evidence derived from unlawful searches and seizures increased the lower courts’ reluctance to rule police conduct unlawful — culminating in Graham, handed down well after the end of the Warren Court’s progressive reign.
Whatever the reasons for the rise of the “split-second” police paradigm, one thing is clear: Formerly a phrase used to connote the helplessness of car accident victims, the “split-second” has become the courts’ way of signaling their reluctance to question law enforcement’s most systematically misguided judgment calls. In so doing, the courts have abdicated their responsibility to articulate meaningful constitutional limitations on the use of force against civilians. The shameful legacy of that abdication is a legacy of unarmed, dead black men.