Stefan Harms, a resident in the department of anesthesiology at the University of Manitoba in Winnipeg, believes that anesthesiologists could use better help. One part of the answer, he’s convinced, is better anesthesia software — programs that will monitor and record patient data, conduct real-time modeling of the effects of different drugs, and even directly control the infusion of those drugs. Yes, there are anesthesia machines and software packages that address some of these jobs, says Harms, but they are either too expensive or they don’t do everything he wants. Sure, if you’re a hospital with $60,000 to fling about, you can buy a state-of-the-art Narkomed 6000 — but many hospitals are on a tighter budget.
When he isn’t in the operating room taking care of patients, Harms is hacking on the five computers in his basement. And he thinks he knows how to achieve his dream of low-cost, reliable anesthesia software — by going the open-source route. Last year, Harms founded LAMDI, the Linux Anesthesia Modular Device Interface. Harms thinks that the open-source software development model, in which the source code to a program is made freely available to the general public for redistribution and modification, offers fruitful possibilities for addressing anesthesiological software needs.
Harms is placing his bets on a central tenet of open-source ideology — the belief that freely available source code encourages a “peer-review” process that produces software that is less buggy and more reliable than proprietary “closed-source” software. The theory is that when everyone can hack on the code, fix problems as they find them, and add their own new features, the code quickly improves. It has worked for the Linux-based operating system, says Harms. Why shouldn’t it work for anesthesiology software, where avoiding crashes and bugs is a life-or-death situation?
“There is a compelling argument for open source just for safety reasons,” says Harms. “If you use tools and software that are not peer-reviewed, you should be more liable if something goes wrong.”
But is the anarchic open-source world really appropriate for the operating room? Who would be liable if the software crashed and the patient died? Some hacker grad student at MIT or Stanford? The anesthesiologist? The hospital? Can free software pay the price of patient mortality?
Lawyers, doctors, and programmers are far from united in their answers to these questions. The problem of software product liability is a hot-button issue for all software developers, whether closed or open source. Commercial software publishers, right now, are pushing for nationwide legislation that would absolve developers of any contractual responsibility for a mishap caused by their software. At the same time, government agencies such as the United States Food and Drug Administration take a very hard look at software that is used in medical procedures. It’s all very well to download Linux from the Net and install it on your Web server. But it’s a completely different matter to get the latest pharmacokinetic drug-infusion software up and running in the operating room.
The obstacles faced by Harms in his quest for open-source anesthesia software suggest that there are some serious potential limits for the open-source software model. The experience of some medical programmers who have placed their code in the public domain indicates that open source is certainly no panacea for the problems faced by medical practitioners. But there are still some intriguing possibilities. If liability issues can be addressed, and if the peer-review component embedded in open-source software can be proven to result in pragmatically better software, then, suggest some open-source enthusiasts, wouldn’t it be our duty to proceed down the open-source road?
Some public domain anesthetic software already does exist. Harms’ LAMDI site links to one program, STANPUMP, that’s been around for almost a decade. STANPUMP is labeled as a tool for “pharmacokinetic modeling and control.” In layman’s terms, says STANPUMP author Steven Shafer, an anesthesiologist at Stanford, that means “computer controlled drug delivery.”
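To give a sense of what “pharmacokinetic modeling” means in practice: such software predicts how a drug’s concentration in the body changes as it is infused. A minimal sketch of the idea, using the textbook one-compartment model with first-order elimination (the function name and all numbers below are illustrative placeholders, not STANPUMP’s actual model or parameters):

```python
import math

def plasma_concentration(rate_mg_per_min, volume_l, k_per_min, t_min):
    """Predicted plasma concentration (mg/L) during a constant-rate
    infusion into a one-compartment model with first-order elimination:
        C(t) = (R / (V * k)) * (1 - exp(-k * t))
    R = infusion rate, V = distribution volume, k = elimination rate."""
    steady_state = rate_mg_per_min / (volume_l * k_per_min)
    return steady_state * (1.0 - math.exp(-k_per_min * t_min))

# Illustrative numbers only -- not clinical parameters.
for t in (1, 5, 30):
    c = plasma_concentration(rate_mg_per_min=10.0, volume_l=15.0,
                             k_per_min=0.1, t_min=t)
    print(f"t={t:3d} min  C={c:.2f} mg/L")
```

A program like STANPUMP works the problem in the other direction: given a target concentration and a richer multi-compartment model, it continuously solves for the infusion rate that should achieve that target — which is exactly why a bug in the arithmetic is so dangerous.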
Shafer includes a disclaimer with his program that states “administration of anesthesia using STANPUMP is your responsibility, not mine.” But that’s not quite enough. He has also been forced to explicitly label the program as “experimental” — not approved for use with humans. Who forced him? The FDA.
“STANPUMP caused me considerable scrutiny by the FDA several years ago,” says Shafer.
The problem, says Shafer, is that under FDA guidelines, STANPUMP qualifies as a “medical device” that must be licensed and approved by the FDA.
“I was liable for potential fines of several thousand dollars per distributed copy of the program,” says Shafer. “At that point several thousand copies were in the hands of anesthesiologists, although few were using the device for clinical care, but clearly the regulatory issues alone opened me up for potential penalties in the millions of dollars … The compromise I reached with the FDA was that as long as I clearly listed on the program that STANPUMP was experimental, and not approved for human use, I would be in compliance with the intent of the regulations.”
“I hate to throw any cold water on their [LAMDI's] enthusiasm, because one never knows where things will lead. However, unless they are exceedingly committed in their efforts, I fear that there are some obstacles that they may have trouble overcoming,” says Shafer. “It would cost me literally millions of dollars to obtain FDA approval for the program, a cost that would never be borne by any potential revenue from the program. Thus, STANPUMP has no potential to meet the software needs of the anesthesia community … Thus, FDA regulations alone appear to foreclose the viability of open-source software.”
Shafer sees other problems besides the FDA. Operating systems like Linux benefit from the open-source model because there are hundreds of thousands of developers willing and eager to hammer on each new version of the code, looking for bugs.
“How many programmers are going to be excited about anesthetic drug delivery,” wonders Shafer, “and in a position to test and debug open-source versions of the software? In the 10 years that STANPUMP has been in the public domain, about 10 individuals have actually taken the code and incorporated it into other programs, or used it to vet their own computer-controlled drug delivery programs. Of these, probably only two or three actually had the mathematical background to actually understand the pharmacokinetic routines.”
Finally, notes Shafer, one simply can’t depend on grass-roots energy when working with patients in life-threatening situations.
“There are reasons that the FDA wants software to be vetted by its regulatory process,” says Shafer. “There is a non-zero chance that my STANPUMP program might fail at some critical moment, and thus expose a patient to a grave risk. There are good programmers who test the hell out of everything, and there are lazy programmers who may distribute changes in the software that have never been tested. It is one thing to distribute Linux to several thousand people running WWW servers — in the worst case several thousand WWW servers crash. It is quite another thing to distribute a new version of medical software, and suddenly several thousand balloon pumps stop working, resulting in death in hundreds of patients. A company simply has no alternative to very thoroughly testing every release of their software. However, the author of an open-source program may be trying to get a patch released between cases in the operating room — with dire implications for the user.”
The question is not hypothetical. Peter Kohn, a lawyer at a Philadelphia law firm that specializes in medical malpractice and product liability suits, says his firm is currently representing a client in the case of a woman who died while under anesthesia in the operating room. According to Kohn, the nurse-anesthetist who was administering the anesthesia is claiming that an anesthesia machine that was supposed to monitor the patient’s blood oxygenation level failed to sound an alarm when the level got too low. Before the nurse noticed the problem, the patient had already suffered brain death.
Leonard Fodera, the lawyer handling the anesthesia case, says there is no evidence that the machine, or the machine’s software, was at fault. In his opinion, doctors often tend to unfairly blame mishaps on machine failure. But he did concede the possibility that a lethal software or mechanical failure could occur — and he argues that in such cases the manufacturer or software publisher should be found at fault.
But what about the theory that open-source developers should be less liable for damages because of the very nature of the open-source development model? Eric Raymond, open-source evangelist extraordinaire, raises the possibility that “in the future, there will be a wave of lawsuits against closed-source products on the grounds that the vendors failed to follow industry’s best practice for reliability.” In other words, the marketplace might consider software vendors more liable for the bugs and failures of their products if they refused to allow the general public to look at their source code.
But just how liable are software vendors for problems with their software in the first place? Intriguingly, both closed- and open-source software developers make nearly identical disclaimers: They take no responsibility whatsoever for potential software mishaps. The standard license for a commercial piece of software — Microsoft Word, for example — specifically declares that the publisher isn’t liable for anything that could conceivably go wrong. The GNU General Public License — which protects such software as Linux — contains similar language: a “disclaimer of warranty” warning that the authors of GPL-protected software are not responsible for any defects in the software.
Would these clauses stand up in court? Right now, the answer varies from state to state. Many states currently have laws that nullify sweeping warranty disclaimers in contract language — but there are few examples of software publishers being taken to court, and fewer still of open-source software authors.
For some critics, however, liability problems are an obvious weak spot in the open-source development model. One Southern California insurance company is pitching insurance policies to open-source developers based on the argument that they need extra protection. Ironically, commercial software publishers like Microsoft have a secret weapon that could end up benefiting all software developers. They are vigorously pushing a proposal called the Uniform Computer Information Transactions Act, or UCITA, that will, according to consumer activists and product liability lawyers, legalize liability disclaimers in software licenses, thereby helping protect publishers from consumer lawsuits.
In July, the National Conference of Commissioners on Uniform State Laws voted to “promulgate” UCITA. The next step is for each individual state legislature to consider the proposal. UCITA critics are howling.
“People are absolutely beside themselves concerning the changes being made,” says Kohn. “The changes are very anti-consumer, and they really remove incentives for manufacturers and publishers to remove problems and ensure safety.”
Anti-UCITA campaigner Cem Kaner also predicts that passage of UCITA will lead to inferior software. Kaner says UCITA will shift liability from software publishers and authors to the distributors and contractors who might install or service software. A consultant who specializes in Linux installations, then, would be liable for any harmful bugs in Linux, but the actual authors of the code itself would not be considered responsible. In the open-source world, argues Kaner, consultants and contractors tend not to be able to afford appropriate legal advice or liability insurance. UCITA, argues Kaner, might therefore act as a damper on the entire open-source movement.
UCITA proponents argue exactly the opposite — they claim that since UCITA would legitimize open-source licenses, open-source developers would enjoy the security to keep hacking without worrying about liability issues. But no one really knows for sure what effect UCITA might have on open-source software — there simply isn’t yet any legal precedent that might hint at how courts will rule on open-source legal issues.
In any event, UCITA would have no direct effect on Stefan Harms and his open-source anesthesia project. He’s in Canada, subject to a whole different set of laws and regulations. But that’s not to say he doesn’t have to worry about liability issues. When the matter at hand involves drugging a patient into an unconscious state in preparation for life-threatening surgery, the question of who will be held accountable when something goes wrong becomes paramount.
Harms doesn’t ignore the problem of liability. Even though he remains convinced that open-source software, in principle, should ultimately produce safer software, he acknowledges that there is a clear need for extensive testing. In his view, that’s exactly the role that government agencies should play.
“I potentially see a need for some kind of government body to help officially check the software, to give assurance that a piece of software will do what it says it will do,” says Harms.
But that would still require someone to pay for the process. FDA approval doesn’t come cheap, as Shafer notes. That’s one reason why current “medical devices” used in anesthesiology — hardware or software — are so expensive.
Many, if not most, programmers who work in the open-source world tend to look askance at any government involvement in the software industry. But some activists argue that the government could provide a valuable public service by taking an activist pro-open-source position. Rigorous testing could be one element of such government action.
In any case, Harms hopes that liability fears aren’t allowed to squash progress in open-source medical software.
“In some ways the liability issue sort of slows it down,” says Harms, a native of South Africa. “Maybe rightly so, but if it completely blocks it, I’m going to say, OK, I’ll go back to Africa and develop it there. I do believe in a formalized process, but on the other hand we have to balance being so cautious that we are not doing anything at all.”