Who will go nuts?

Predicting mental illness is usually no better than gambling, but we keep trying.


Maybe it’s because we’re all on edge, spending time with our families during the holidays. Or maybe it’s the approaching millennium, but I’ve been getting questions like these lately: “My son’s acting weird. What are the chances that he’s becoming schizophrenic?” “I have a mother who has breast cancer. What’s the likelihood that I’ll get it?” and “What are the odds that my husband caught herpes from a toilet seat?”

Whether evaluating a patient with a family history of cancer or an elevated cholesterol level, or passing judgment on a violent rapist scheduled for possible parole, doctors are being asked to become scientific clairvoyants. Medicine has become the highest-stakes game in town, and we physicians are the de facto oddsmakers.

As a passionate poker player, I am aware of the vagaries of predictions. A few weeks ago, playing Texas Hold’em in a local casino, I called a final bet, knowing that there was only one hand I could beat. Fortunately, that’s the very hand my opponent had. While I scooped up the chips, everyone laughed. Some murmured reluctant praise, others groused that my call was stupid, an utter fluke. I shrugged in a fabulous display of false modesty. On the way home, my feeling of smugness lapsed into doubt. Had my read of the other player been an informed hunch, a gut feeling, innate talent, or pure luck? To really know, I would need to have available for statistical analysis all the prior poker decisions I’d ever made, an impossible task given my gambler’s version of a selective memory. Now the smugness was really gone. It was pure folly to think that I could tell the difference between skill and luck based on a single experience.

I went home, counted my winnings and told myself that the cash was empirical evidence. But I knew better.

The morning following the poker game, a journalist friend phoned. She wanted to know if there were psychiatric tests that might detect potentially suicidal pilots. My first response was, “Are you kidding? Human behavior isn’t that consistent.” She continued, “Certainly there must be some traits that would be sufficiently alarming to ground a pilot before he got into trouble?”

“Wishful thinking,” I said, before agreeing to do some homework.



Medicine has the same problem with predictions as my poker playing. We mentally tabulate our past experience and, from this inaccurate, selective database, make all sorts of serious, life-and-death decisions. Sometimes the predictions pan out. Sometimes they don’t, particularly in the vaguest of all specialties, psychiatry. So, what is known about psychiatric predictions?

(A brief caveat — I have several psychiatrist friends whom I respect. The following is not meant as a blanket condemnation. Besides, they can easily retaliate by asking how much good a neurologist does, a point with which I would readily agree.)

Let’s start with the classic Rosenhan study. In 1973 D.L. Rosenhan, a professor of psychology and law at Stanford University, felt strongly that psychiatric diagnoses were in the minds of the observers. To test his hypothesis, he had eight normal, sane volunteers admitted to 12 different mental hospitals. The eight included a psychology graduate student, psychologists, a pediatrician, a painter and a housewife. Each was instructed to show up at the admissions office of the psychiatric institution and say that they were hearing voices. The voices were described as unclear, but seemed to be saying, “empty” and “hollow” and “thud.”

Rosenhan chose these symptoms because of their similarity to existential symptoms experienced by normal people. The patients altered their names and vocations, but otherwise gave accurate renditions of their present circumstances and past psychological history. They had no other psychiatric symptoms.

Seven of the eight were diagnosed as schizophrenic and hospitalized for an average of 19 days. (The other was diagnosed as manic-depressive.) They stopped feigning the hearing of voices shortly after admission. During their hospitalization, none were found out and all were discharged with a diagnosis of schizophrenia in remission. One third of the ward patients questioned felt certain, or at least suspected, that the pseudo-patients were sane. The staff raised no such doubts.

Of course there were the usual protestations, criticisms, and general article nit-picking, but the point was obvious. Evidence-based psychiatry was just emerging, challenging the more traditional anecdotal case-history approach. Shrinks still believed that they could stroke their beards and exclaim, “In my experience …” (Can you imagine what would happen to psychics if they had to publish statistical analyses of their crystal-ball utterances?)

About this time I had a singularly unnerving medical experience. After my residency, I became the neurology consultant for a nearby state mental hospital. I interviewed a teenage mother incarcerated for having shaken her young daughter to death when she wouldn’t stop crying. During a subsequent evaluation on the psych ward at County Hospital, she had tried to choke a weeping, demented 90-year-old wheelchair-bound woman.

Sitting across the consultation desk from the attractive, fresh-faced mother (two burly guards stood alongside), I soon realized that I had no idea what, if anything, was wrong with her. She could have been a poster girl for the 4-H Club. Nothing was obvious; her demeanor, speech, neurological history and exam were all normal. After we finished, I asked her why she’d tried to choke the old woman. The patient answered dryly, without a hint of emotion, “I hate the sound of crying.”

I still remember her staring at me as though she, too, were puzzled by her behavior. No one, including herself, understood her. Sane? Insane? Cunning? I had no idea, nor even a clue as to how to make such a determination. I knew that the psychiatrists were more skilled than I, but to what degree? I had the sinking feeling that accuracy of prediction would be an afterthought in the way that a cloud is a rain cloud only after it starts to dump rain. I thought of the Rosenhan study and was glad I was not the presiding judge.

Three psychiatrists judged the woman mentally incompetent to stand trial; she was shipped to the state facility for the criminally insane. She would receive the usual treatment, until one day a court-appointed psychiatrist would be put to the test — “Is she still dangerous?”

I do not know what happened to the woman. (She was transferred from hospital to hospital until I lost track of her.) But here are some sobering statistics on psychiatrists’ ability to pick out subsequent violent patients. I do not offer them as specific criticism of psychiatry, but rather as a not-so-palatable dose of realism. It’s time we take a hard look at the limits of assessment of future behavior.

A study at a New York psychiatric hospital, published last month, analyzed the ability of the treating psychiatrists to predict who, among 183 male patients, were likely to show assaultive behavior during the following three-month period. Their accuracy rate was 71 percent; 29 percent of future violent patients were not identified.

In a 1993 study, members of the department of psychiatry, University of Pittsburgh School of Medicine, evaluated patients originally seen in the emergency department of a metropolitan psychiatric hospital. When ready for discharge, the patients were assessed for their potential for violence and accordingly assigned to one of two groups — violent or nonviolent. In a six-month follow-up, violent acts occurred in half of the cases predicted to be violent, but also in over one-third of the “nonviolent” group. Within this group, predictions of female patients’ violence were no better than chance (flipping a coin was equal to a psychiatric evaluation).

The same authors then took the same group of patients and compared the predictive capabilities of the examining psychiatrists to actuarially derived data gathered from the patients’ histories. Computer-assessment was found to be more accurate than assessments by the treating clinicians.

In another study, even such intuitively obvious distinctions as verbal threats versus actual prior physically assaultive behavior have not been shown to accurately predict subsequent violent behavior.

Why do clinicians consistently miss a third of subsequently violent patients? In large part, the answer lies in the huge overlap between normal and abnormal behavior. There are no clear cut-offs between the truly dangerous and those posing only idle threats. Improving the sensitivity of our predictions would necessarily misclassify a large percentage of psychiatric patients as dangerous. I can’t help wondering how Van Gogh would have been classified after he cut off his ear. Would gentle Vincent, who never hurt a sunflower, have been deemed dangerous?
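The trade-off behind that paragraph is base-rate arithmetic: because violence is rare, every gain in sensitivity is paid for with false alarms among the harmless majority. A minimal sketch, using entirely hypothetical numbers (none of them from the studies cited above):

```python
# Hedged sketch with made-up numbers: how raising the sensitivity of a
# violence screen also raises the count of harmless patients flagged.

def screen(population, base_rate, sensitivity, specificity):
    """Return (true positives, false positives) for a screening test."""
    violent = population * base_rate          # patients who will act violently
    nonviolent = population - violent         # patients who never will
    true_pos = violent * sensitivity          # violent patients caught
    false_pos = nonviolent * (1 - specificity)  # harmless patients flagged
    return true_pos, false_pos

# 1,000 patients, 10% of whom will actually become violent. Pushing
# sensitivity up usually costs specificity (the pairs below are assumed).
for sens, spec in [(0.70, 0.80), (0.95, 0.60)]:
    tp, fp = screen(1000, 0.10, sens, spec)
    print(f"sensitivity {sens:.0%}: catch {tp:.0f} violent patients, "
          f"wrongly flag {fp:.0f} harmless ones")
```

With these invented figures, catching 25 more truly violent patients doubles the number of harmless patients locked up along with them.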

And what are we to do with the concept of the final straw, the act of violence hinging upon something as non-specific as the sound of a loud TV, or conversely, interrupted by a calming fragrance or a flash of consoling memory? We want precise predictions to perpetuate the illusion of a world of order. We cannot include the final straw in our predictions because we cannot know in advance what the final straw will be. That involves yet another set of equally arbitrary assumptions. The young woman who tried to choke the old lady — would that set of exact circumstances occur again, and was that exact situation necessary to trigger her attack?

We tremble at the thought of releasing potentially violent mental patients into the community. We demand screening more accurate than seems possible. None of us are happy with a one-third error rate, but even if we could do better, where would we draw the line — 90 percent? 95 percent? 99 percent? The “lock ’em up and throw the key away” folks would choose a higher number than the “give him another chance” contingent, but is there any number where the two groups might reach a compromise? What percentage guarantee of nonviolence would you demand before accepting a former rapist in your neighborhood? A pedophile? A murderer? And would the number be lower if it were someone else’s neighborhood?

These predictions have enormous consequences: whether a patient is released from a hospital or jail or, conversely, confined against his wishes. To give the psychiatrists their due, they may underestimate the tendency toward violence out of a wish not to falsely accuse, and also to maintain the belief that we are less violent as a people than the statistics suggest.

Are we at the limits of what we can predict? I asked a professor of psychiatry at the University of California. Her answer: We can categorize high vs. low risk. And the statistics do bear this out. If the psychiatrists were giving recommendations on Sunday football games, a 70 percent accuracy rate would be astounding. Ditto for the stock market, or anything financial. I would be thrilled to read my poker opponents correctly 70 percent of the time.

But when lives hang in the balance, it’s a different story. Do we wish to give up the possibility of redemption if it means living with a certain degree of fear and apprehension? If we declare the once-violent criminal/patient’s behavior as immutable, beyond rehabilitation, doesn’t this dampen our own feeble illusions and hopes for change?

Psychiatrists seem equally baffled by predicting suicide. A study at the University of Iowa Hospitals and Clinics looked at 1,900 patients with major depression. The psychiatrists tried to predict those who would subsequently commit suicide; they failed to identify any of the subsequent suicide victims. The study concluded, “It is not possible to predict suicide, even among a high-risk group of inpatients.” A similar study in France concluded, “It is impossible to describe a specific picture of the depressed suicidal patient, and clinical scales to estimate suicide risk are of limited interest.”

Regular people are no better than research scientists. Paul Ekman, a psychologist at UC San Francisco, prepared a videotape of interviews with 10 men telling their views on capital punishment. The viewer is asked to decide which of the men is lying. Most people score at chance levels or only slightly higher. Those you would think would be better than average — police officers, trial court judges, FBI and CIA agents — do little better than randomly selected bus drivers or pipe fitters. (A review of 120 similar studies revealed only two that reported lie-detection rates of 70 percent.)

Perhaps the most damning evidence of our own judgment skills is the post-mass-murder interview of the neighbors. “I don’t believe he could have done it. He seemed well-behaved, polite and had the cutest little dog.” It’s time to stop the feigned surprise when our next-door neighbor is found with bodies under the floorboards. We must grow up, learn to accept that human behavior is chaotic, no more predictable than thunderstorms, hurricanes or earthquakes. Psychiatrists can assess risks; they cannot make black-and-white predictions.

Remember that when you sit on a jury, or wonder about the EgyptAir pilot, O.J. Simpson, the Clinton-Lewinsky affair, or why Ted Kaczynski’s explosive personality wasn’t recognized in advance. Judging human nature is a crapshoot, not a science, and is unlikely to change. That much I can say with certainty.

Robert Burton, M.D., is the former chief of neurology at Mount Zion-UCSF Hospital and the author of "On Being Certain: Believing You Are Right Even When You're Not." His column, "Mind Reader," appears regularly in Salon.
