Andrea Yates was, in the words of a psychiatrist who treated her for one episode of postpartum depression with psychosis, one of the five sickest patients that psychiatrist had ever treated. She neglected to bathe or drink water, acted bizarrely, and suffered from severe delusions and hallucinations that experts say made her a clear danger to herself and others. But just days before she drowned her five children in a bathtub, Andrea Yates’ doctor decided to take her off her antipsychotic medications — a mistake, in hindsight, that may have cost her children their lives.
What if we took humans out of the equation? What if a dispassionate computer had been making the decisions about Yates’ care, rather than a human doctor sitting across a desk from her? Would a computer have left her on medication? Would her children still be alive?
Three years ago, the Institute of Medicine estimated that medical mistakes kill between 44,000 and 98,000 people per year in hospitals. If that’s true, it means medical errors kill more people each year than car accidents, breast cancer or AIDS. Efforts to reduce such errors get only a fraction of the attention — and funding — that goes to AIDS research, but arguably it would be far easier to substantially reduce life-threatening medical errors than it has been to create an AIDS vaccine.
While some researchers have poked holes in the estimates of the number of patients killed by errors, the real effort should go toward changing the practice of medicine to make it safer. One doctor arguing loudly that something should, and can, be done to reduce errors is Atul Gawande, a surgical resident in Boston, a staff writer for the New Yorker and now the author of a new book, “Complications: A Surgeon’s Notes on an Imperfect Science.”
Gawande is arguably the best nonfiction doctor-writer around; his talents are a source of envy among the rest of us, and this collection showcases his work well. He’s prescient and thoughtful, in awe of the medicine he practices without being an unthinking cheerleader. He is able to enter a story, but never overstates his own role as some doctor-writers are wont to do. The title of Gawande’s book recalls the title of author and scientist Lewis Thomas’ essay collection “The Youngest Science: Notes of a Medicine Watcher.” That’s apt, because Gawande’s work is well on its way to becoming the heir to Thomas’ humble, insightful and brilliantly crafted oeuvre.
Gawande’s greatest contribution, however, is that he has no fear of fighting the myth of the infallibility of doctors. At times, machines can best man. In a 1996 study of heart attack diagnosis, a Swedish cardiologist read 2,240 electrocardiograms (EKGs) — the squiggly lines that show the electrical rhythms running through the heart. Of the 1,120 EKGs that showed heart attacks, the cardiologist correctly identified 620. A computer, reading the same 2,240 EKGs, found 738. Neither was perfect, but the computer had won. Deep Blue had defeated Garry Kasparov.
Computer-based diagnostic systems have existed for some 30 years, and more than a decade ago a philosopher suggested that “diagnosis without doctors” would be an improvement over human-based systems. For the most part, however, the medical community has seized on the limitations, not the promise, of such systems. The last major study, in 1994, found that they made correct diagnoses only about half to three-quarters of the time, making them suitable only for teaching medical students how to diagnose hypothetical patients.
It turns out, though, that doctors may have been too quick to reject computerized diagnostics across the board. Hospitals have found that computerized systems are invaluable when used to help make highly specialized decisions such as which antibiotics to use in an intensive care unit and which patients with HIV should be on medicines to prevent deadly pneumonias. Machines are also at least as good as pathologists when it comes to reading Pap smears — and as the Swedish study showed, you may want a computer, not a cardiologist, to check your EKG.
So why not have computers decide which patients suffering from psychotic disorders or depression — such as Andrea Yates — should be taking medication? If the evidence of her illness was as clear-cut as it seems, wouldn’t a dispassionate computer have kept her on medication, instead of bowing to pressure from her or her husband?
Doctors, after all, make mistakes. The central message of Gawande’s book is that despite medicine’s great strides, it’s a fallible and human art often confused with a science. One of the most powerful essays here, “Education of a Knife,” which ran last February in the New Yorker, finds Gawande suspended between the need for doctors-in-training to practice and the needs of patients to have the best healthcare available. It’s an uncomfortable place to be, as Gawande acknowledges.
Gawande tells the story of his own reaction to a cardiology fellow who offered to treat Gawande’s son for a congenital condition that had been stabilized. Gawande turned him down in favor of a senior doctor, even though he knew that the fellow, still in training, needed the experience. Another doctor, an advocate for asking patients to allow residents to treat them, admitted to Gawande that he and his wife had not allowed residents into their delivery room.
Gawande describes the morbidity and mortality conference, a brutal weekly dissection by his department’s surgeons of the week’s errors. His experiences are similar to those of every medical student, resident and faculty member who has ever sat through an M&M. I remember one irascible senior surgeon from my medical school faculty who used to analyze particularly error-laden cases by asking residents, “So, why didn’t you just take him out back and shoot him?” And when patients do die, doctors analyze their mistakes through autopsies, as Gawande relates in another essay — although they don’t do as many as they should, many experts say.
Dispassionate analysis of errors, in which blame is not assigned and the analysis is carried out by an outside agency, is key to turning the lessons of M&M’s and autopsies into better medical practice, Gawande writes. Just as the Federal Aviation Administration has used these principles to improve airline safety, the American Society of Anesthesiologists has been at the forefront of preventing medical errors. Other groups are coming on board the effort, although the prevailing culture of medicine — God complexes are not uncommon — has slowed progress.
There are any number of small changes that could dramatically improve patient safety. Requiring all doctors to enter prescriptions into computers, for example, would provide a double check, preventing two similar-sounding medicines from being confused and stopping doctors’ notoriously bad handwriting from leading to incorrect prescriptions.
Still, even Gawande has some reservations about the wholesale mechanization of medicine. “Western medicine is dominated by a single imperative — the quest for machinelike perfection in the delivery of care,” Gawande writes in an essay in which he visits Shouldice Hospital, a “hernia factory” outside of Toronto. At Shouldice, named for a pioneering hernia surgeon, doctors do nothing but perform hernia repairs, all identical according to a standard protocol. But the experience left Gawande cold.
“Maybe machines can decide,” he writes, “but we still need doctors to heal.”
He’s right, because healing takes more than diagnosis and treatment. Andrea Yates and her husband Rusty, however, were hardly ideal patients. Dr. Park Dietz, a psychiatrist who testified for the prosecution in the Yates case, told the court that the couple had repeatedly ignored medical advice. When doctors told Yates to stay on her medications, avoid future pregnancies and undergo shock therapy, she refused. Patients with disorders like Yates’ are often in denial about their illnesses and are understandably loath to take medications that can have serious side effects; they often complain of feeling sluggish, both physically and mentally. In Yates’ case, she and her husband may also have feared that the drugs would endanger future pregnancies.
A computer might have continued Yates’ prescription, but it could not have convinced her to actually take her medications. To do that, we need doctors, perhaps with a little help from machines. If anything, leaving certain analyses and decisions to computers could free doctors to work on their bedside manner, which many patients say has declined as a result of managed care.