How Toms River cracked a cancer cluster

Chemical companies treated this N.J. town as a private dumping ground for decades. Here's how they were made to pay

Published March 12, 2013 12:00AM (EDT)

Excerpted from the book "Toms River: A Story of Science and Salvation"

By the spring of 1995, when Steve Jones called to ask him to look into a possible cluster of childhood cancer in Toms River, Michael Berry had been New Jersey’s chief cluster investigator for almost nine years. It was still just a part-time responsibility — Berry spent most of his time on other tasks at the state health department — but it was now the least enjoyable part of his job. One of the first “incidence analyses” Berry ever attempted was the 1986 study of childhood cancer in Toms River. Its ambiguous results turned out to be a harbinger of dozens of similarly unsatisfying cluster studies he undertook around the state—including another one about Toms River kids in 1991. “After a while, it got frustrating,” he recalled many years later. “I mean, what were we accomplishing?”

A cluster study in New Jersey was like one of those old-fashioned Hollywood movie backdrops that looked fairly impressive until you leaned on it and it toppled over. Berry’s frustrations ran much deeper than just the usual problems with the state cancer registry, which in 1995 was still in poor condition, its records incomplete and arriving three years late or longer. As he accumulated years on the job, Berry came to realize that even if the registry had been up-to-date and reliable, he would not have been able to tell callers what they really wanted to know: whether an environmental problem was causing cancer in their neighborhood. For reasons that were not easy to explain in a phone call, it was a question Berry could not answer. In fact, Berry spent almost as much time explaining the limitations of cluster studies as he did conducting them. Sometimes he felt his job was more about being a therapist to anxious callers—he got about thirty cluster calls per year—than about investigating what they told him. “It was hard to believe I was really addressing anybody’s concerns,” he recalled.

Most of the people who called him had at least three major misunderstandings about cancer patterns, each of which led them to assume that all clusters had a hidden cause and that Michael Berry could unearth it—if only he tried hard enough. The first misunderstanding was the nature of clustering itself. As Berry knew, everything clustered to a degree, often for no reason other than chance. Nothing that was subject to the complexities of the natural world, whether birds in a flock or sick people in a city, was distributed evenly in space and time. Some clumping was inevitable. In cancer incidence studies, the challenge was not to find the clumps—that was usually pretty easy, thanks to the registry—but to identify which were likely to have an underlying cause other than randomness.
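The point is easy to demonstrate with a few lines of code. The sketch below is a minimal simulation with invented numbers, not anything from the state registry: it scatters cases uniformly at random across one hundred identical neighborhoods, and even then some neighborhoods look like hotspots.

```python
import numpy as np

# Scatter 500 hypothetical "cases" uniformly at random across a grid of
# 100 equally populated neighborhoods, then count cases per neighborhood.
rng = np.random.default_rng(seed=1)
neighborhood_of_case = rng.integers(0, 100, size=500)
counts = np.bincount(neighborhood_of_case, minlength=100)

print("expected per neighborhood:", 500 / 100)   # 5.0
print("busiest neighborhood:", counts.max())
print("quietest neighborhood:", counts.min())
# In a typical run the busiest neighborhood holds 11 or 12 cases, more than
# double the average, even though every neighborhood had identical risk.
```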

The second misunderstanding was about the ubiquity of cancer. In adults, it was a much more common condition than most people recognized. In the mid-1990s, there was one new case per year for every 230 New Jerseyans. A more striking way to think about that was that an American man faced a 44 percent chance of getting cancer at some point during his life; for women, the lifetime risk was 38 percent. With so many cases, it was inevitable that some neighborhoods would have surprisingly high concentrations of cancer—again, for no reason other than bad luck. “People just didn’t realize how much cancer there is all over,” Berry would later explain.

Finally, many of the people who called Berry to report a possible cluster assumed that cancer was a single disease instead of a catchall term applied to more than 150 distinct conditions. All cancers involved uncontrolled cell division triggered by genetic damage, but many had little else in common. Cervical cancer, for instance, was caused chiefly by a sexually transmitted virus; including cervical cases in a residential cancer cluster study made little sense. On the other hand, focusing only on cancer types that had been plausibly linked to industrial chemicals—brain and blood cancers, for example—reduced the total number of cases in a cluster study and thus made it even harder to confidently identify nonrandom clusters. For a rare type of cancer, just one extra case in a neighborhood—raising the total from one to two cases, or from two to three—would be enough to make the neighborhood look like a hotspot, even though that one additional case could easily be coincidental.

By the time Berry finished clearing up those misconceptions about cancer and then moved on to the deficiencies of the state registry, with its out-of-date and incomplete records, many callers were so discouraged that they dropped their request for a cluster investigation. About half of the time, however, Berry’s explanations did not satisfy a caller. In those cases—perhaps fifteen times a year—Berry would take the next step and conduct an incidence analysis, using registry data. The 1986 and 1991 analyses he conducted on childhood cancer in Toms River were typical. The analyses were simple comparisons between the number of known cases in a community and the number that “should” have occurred there based on the average incidence rate for all of New Jersey.
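In code, that kind of first-pass comparison takes only a few lines. The sketch below is a hypothetical reconstruction rather than Berry’s actual method or data: it computes the expected count from a statewide rate, forms the observed-to-expected ratio, and asks how often chance alone would produce at least that many cases.

```python
from scipy.stats import poisson

def incidence_analysis(observed, population, statewide_rate):
    """First-pass cluster screen: observed cases versus the number
    'expected' if the community matched the statewide incidence rate."""
    expected = population * statewide_rate
    ratio = observed / expected          # standardized incidence ratio
    # Probability of seeing at least this many cases by chance alone,
    # under the usual Poisson assumption for rare-disease counts.
    p_value = poisson.sf(observed - 1, expected)
    return expected, ratio, p_value

# Invented figures: 8 cases among 40,000 children, statewide rate of
# 1 case per 10,000 children per year.
expected, ratio, p = incidence_analysis(8, 40_000, 1 / 10_000)
print(f"expected {expected:.1f} cases, ratio {ratio:.1f}, p = {p:.3f}")
```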

What frustrated Berry about those analyses was that their only real scientific value was as a first pass, a preliminary screening tool. They were a way to identify communities worthy of more sophisticated investigations that might include air, water, and soil tests as well as interviews to determine residents’ past exposure to carcinogens. Yet his supervisors in the health department never authorized any follow-up work in neighborhoods, no matter what Berry had found initially. Identifying true pollution-induced clusters amid the sea of unlucky flukes, Berry discovered, was beyond the resources, expertise, and inclination of the State of New Jersey. If a community really did have significantly more cancer than expected—and over the years, Berry had found several communities that seemed to—he would confer with his supervisors, send a letter explaining his findings to the person who had asked for the study, and then . . . nothing. There was no next step, no follow-up. Just the letter explaining the anxiety-inducing results and reiterating what Berry had already told the caller in their first conversation: The cluster was probably due solely to bad luck—but no one could say for sure.

* * *

There was a depressing logic behind New Jersey’s faux approach to cluster investigation. It had its roots in a century-long quest to verify neighborhood cancer clusters scientifically—a quixotic effort that tantalized and ultimately frustrated everyone who attempted it, including some of the greatest statisticians of the twentieth century.

Back when most cancers were believed to be infectious—the triumphs of Louis Pasteur and Robert Koch in the late nineteenth century convinced many Europeans and Americans of that era that all diseases were transmissible—the existence of “cancer houses” plagued by high numbers of cases was taken for granted. Just like Linda Gillick a century later, Victorians of all social strata, from aristocratic reformers to the working poor, looked around their communities, saw that cancer was not evenly distributed, and assumed that a hidden cause must be at work. With rare exceptions, cancer was treated like a shameful plague; many people thought it was related to venereal disease, and others believed that its victims should be barred from hospitals as risks to public health.

A few physicians tried to apply scientific scrutiny to the notion of “cancer houses.” One of the most dedicated was an otherwise obscure Englishman named Thomas Law Webb. He compiled the addresses of 377 people who died of cancer between 1837 and 1910 in the industrial town of Madeley. In 1911, Webb gave his data to the person in Britain best qualified to analyze it, a hot-tempered polymath named Karl Pearson. A man of breathtakingly broad interests—he was a philosopher, poet, songwriter, and novelist in his spare time—Pearson reserved his greatest passion for the development of mathematical statistics as a full-fledged academic discipline and as a tool for solving social problems. That same year, Pearson founded the world’s first academic department of applied statistics at University College London, which became the global incubator of the nascent discipline of biostatistics.

There is no indication in the historical record of how Pearson found out about Webb’s remarkable collection of cancer data, but it is easy to see why he would be eager to analyze it: Webb’s records were a way for Pearson to use his new methods of statistical analysis to test the widely held belief in “cancer houses.” He was especially interested in what came to be known as significance testing, or statistical significance. The concept is simple: Any apparent pattern within a group of numbers, or apparent correlation between two or more groups of numbers, should be tested to determine how likely it is that the pattern or correlation is due to chance and not to some other cause.

The 377 Madeley residents who had died of cancer between 1837 and 1910 lived in 342 houses, according to Webb’s records. To determine whether cases were clustering for reasons other than chance, Pearson first needed to estimate how many homes would have multiple cases if those fatal cancer cases were distributed at random. Using the statistical methods he had developed, Pearson calculated that if cancer were distributed randomly among the nearly three thousand residences in Madeley, there would be about 331 houses with one cancer death, twenty-two with two deaths, and one with three. But Webb’s records showed there were actually 315 houses with one death, twenty with two, six with three, and one unfortunate home in which four residents died of cancer over the seventy-three-year period.
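Pearson derived his expected figures analytically, but the same null hypothesis can be restaged today as a quick Monte Carlo experiment. The sketch below simplifies by assuming every house was equally likely to receive each death:

```python
import numpy as np

# Restage Pearson's null hypothesis: drop 377 cancer deaths at random
# among roughly 3,000 Madeley houses, many thousands of times, and count
# houses with one, two, three, and four deaths.
rng = np.random.default_rng(seed=7)
N_HOUSES, N_DEATHS, N_TRIALS = 3_000, 377, 10_000

totals = np.zeros(5)
for _ in range(N_TRIALS):
    per_house = np.bincount(rng.integers(0, N_HOUSES, size=N_DEATHS),
                            minlength=N_HOUSES)
    for k in range(1, 5):
        totals[k] += np.count_nonzero(per_house == k)

print("average houses with 1/2/3/4 deaths:", (totals[1:] / N_TRIALS).round(1))
# Roughly [332, 21, 0.9, 0.0]: close to Pearson's calculation, and nothing
# like the six three-death houses and one four-death house Webb recorded.
```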

To a non-statistician, the two sets of numbers might not have looked very different. But to Pearson they were night and day. Could the discrepancy be a fluke? Not according to his calculations. “The probability that such a distribution could arise from random sampling is only one in many, many millions,” he concluded after conducting a series of probability experiments. Pearson thought that his provocative findings merited a comprehensive follow-up investigation, including comparisons to nonindustrial towns and a detailed breakdown of cases by age and occupation. But there was no follow-up study. In that sense, Pearson was the first of a long line of cluster hunters whose tantalizing, tentative findings failed to attract the interest and resources needed to confirm or refute them. He never published on the topic again in his long career, which ended with his death in 1936.

The federal government’s first foray into cancer cluster investigation had accomplished very little, yet by the late 1970s the CDC was conducting more cluster studies than ever and was increasingly focusing on chemical pollutants, not viruses—all because of demands from citizens and politicians. Publicity over Love Canal and other environmental disasters had sparked a boom in requests for cluster investigations, especially in states that had cancer registries. State health departments were fielding about fifteen hundred such requests per year. The most worrisome of those requests—the ones with a plausible suspect cause and rates high enough to make random variation an unlikely explanation—were passed on to the CDC, which by the 1980s was conducting an average of five or six cluster investigations each year. Hundreds more were at least crudely investigated by state health departments, as Michael Berry was doing in New Jersey.

The complaint-driven genesis of almost all of those cluster investigations was turning out to be a profound weakness, and not just because anxious members of the public often reported cancer patterns that turned out to be unexceptional. There was a deeper issue that could not be solved by the clever use of incidence comparisons and statistical significance tests. This was the problem of hidden multiple comparisons. The case-control studies popularized by Richard Doll in the 1950s were scientifically elegant not only because they were large enough to reduce statistical uncertainty but also because they began with a hypothesis. Doll wanted to test the proposition that smoking was a risk factor for lung cancer, so he assembled a large group of cases and compared them to a similar but cancer-free control group. Most cluster studies, by contrast, turned deductive science on its head. Instead of starting with a testable cause-and-effect hypothesis, they began with someone cherry-picking a suspicious cluster of cases out of a much larger population.
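The trap is easy to reproduce in a simulation. In the hypothetical sketch below, no neighborhood has elevated risk at all, yet scanning a thousand of them for “significant” excesses reliably turns up dozens of impressive-looking clusters:

```python
import numpy as np
from scipy.stats import poisson

# One thousand hypothetical neighborhoods, each expecting 5 cases, with
# case counts drawn purely by chance. No real clusters exist anywhere.
rng = np.random.default_rng(seed=3)
N_NEIGHBORHOODS, EXPECTED = 1_000, 5.0

observed = rng.poisson(EXPECTED, size=N_NEIGHBORHOODS)
p_values = poisson.sf(observed - 1, EXPECTED)   # P(at least this many cases)
false_alarms = int(np.sum(p_values < 0.05))

print(f"{false_alarms} of {N_NEIGHBORHOODS} neighborhoods look 'significant'")
# Typically around 30. Spotting one of these after the fact, then testing it
# against the same data, is the hidden multiple-comparisons problem.
```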

Even when governments made extraordinary efforts to confirm a reported neighborhood cluster via environmental testing, the results were ambiguous. That was certainly true of the most famous and carefully documented residential cancer cluster of the era: the twelve cases of childhood leukemia in Woburn, Massachusetts, where just five cases would have been expected based on the demographics of that blue-collar town north of Boston. Later, in the 1990s, the Woburn cluster would become famous (and, in Toms River, very influential) because of the book and movie "A Civil Action" and because of a state study that found an association between childhood leukemia and mothers who drank contaminated water—an exceedingly rare cause-and-effect confirmation of a residential cluster. But in the 1980s, two smaller studies in Woburn—one conducted by government scientists, the other by biostatisticians working with the affected families—looked at the leukemia–drinking water hypothesis and came to opposing conclusions.

By the late 1980s, there was no avoiding the unsettling conclusion: Neighborhood cancer cluster studies appeared to be a fool’s errand, a source of perpetual embarrassment to the agencies that conducted them and the politicians who had to defend their unsatisfying results. In fact, a rough consensus was emerging among cluster researchers in state health departments and the CDC: Governments should get out of the business of investigating residential cancer clusters, no matter how vociferously the public demanded them. To lay the groundwork for such a controversial policy change, they organized a meeting at the Hotel Intercontinental in Atlanta, near the CDC headquarters. The 1989 gathering was officially known as the National Conference on the Clustering of Health Events, but it quickly acquired a much catchier name: the cluster buster conference.

To deliver the opening address, the organizers selected a paragon of the epidemiology establishment. Kenneth Rothman of Boston University had written two popular textbooks and was the founding editor of the journal Epidemiology. He got right to the point. “I am about to tell you that there is little scientific value in the study of disease clusters,” he bluntly told the assembled scientists, some of whom—including Clark Heath—had spent their professional lives doing just that. “With very few exceptions, there is little scientific or public health purpose to investigate individual disease clusters at all.” Many of the researchers who followed Rothman at the podium agreed, especially about residential clusters. But all acknowledged struggling with the consequences of ignoring requests for investigations. As Alan Bender of the Minnesota Department of Health, one of the most experienced cluster investigators, later told The New Yorker: “Look, you can’t just kiss people off.” Instead, he suggested a step-by-step response system that emphasized establishing a rapport with worried callers. Seventy-five percent of the time, he reported, “one or two telephone calls and a follow-up letter will satisfactorily answer the caller’s concerns.”

The cluster buster conference had a powerful effect. Within months of its close, investigations of non-occupational cancer clusters in the United States had all but stopped. The CDC issued guidelines urging states to adopt Minnesota-style systems and ended its own cluster investigations, at least for a while. “The state health departments didn’t want to do these cluster investigations anyway, and now they could stop and say they were just doing what the CDC wants,” remembered Daniel Wartenberg, a New Jersey epidemiologist who attended the conference and argued in vain against the majority view. Minnesota’s Bender, by contrast, carried the day with his categorical dismissal of cluster studies. “The reality,” he told The New Yorker, “is that they’re an absolute, total, and complete waste of taxpayer dollars.”

Now some of the most prominent cluster-hunters in the world were confirming Berry’s own doubts about what he was doing.

* * *

The request Michael Berry received on March 13, 1995, for another investigation of childhood cancer in Toms River sounded like another exercise in cluster-hunting futility: a vague complaint, a small community, very few cases of cancer, and no obvious culprits—at least, as far as Berry knew at the time. Yet he did not try to talk Steve Jones into withdrawing his request. Jones was not an ordinary citizen. He worked at the ATSDR, the federal Agency for Toxic Substances and Disease Registry, and he was passing along a complaint from another authority figure, an oncology nurse at one of the most prestigious children’s hospitals in the world. Just as importantly, Toms River was not just another community. By 1995, the logbook in Berry’s office showed that the state health department had received five calls about childhood cancer in Toms River. The first three—in 1982, 1983, and 1984—were not followed up, but the 1986 request from Chuck Kauffman and the 1991 request from Robert Gialanella had each prompted Berry to undertake an incidence analysis, the second of which revealed that pediatric brain tumors and leukemias seemed to be on the rise during the late 1980s, even if the increase was not large enough to be statistically significant.

There was another worrisome factor, too. The state health department had just completed a study comparing childhood cancer incidence in New Jersey’s twenty-one counties. The 1994 analysis found that from 1980 to 1988, the overall childhood cancer rate in Ocean County was well above the statewide average. That troubled Berry, and it bothered him even more that the rates in Ocean County seemed to be especially high for the category of cancers that Robert Gialanella and others had been most concerned about: brain tumors. Thirty-seven Ocean County children under age fourteen had been diagnosed with brain and nervous system tumors between 1980 and 1988, when the overall rate for New Jersey suggested there should have been just twenty-two. In a county with eighty thousand children, that was nearly 70 percent more than expected. And now Steve Jones was telling him that the Philadelphia nurse was especially concerned about brain tumors in Toms River kids.
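Under the standard Poisson assumption used in such screens (a back-of-the-envelope check, not the health department’s exact population-adjusted calculation), an excess of that size is genuinely hard to dismiss as chance:

```python
from scipy.stats import poisson

# How surprising are 37 cases where 22 were expected? The chance of seeing
# 37 or more, if the county's true risk matched the statewide rate:
p = poisson.sf(36, 22)       # P(X >= 37) with mean 22
print(f"p = {p:.4f}")        # roughly 0.002, about one chance in 450
```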

Berry set aside his reservations and told Jones that he would look into it.

Excerpted from the book "TOMS RIVER" by Dan Fagin. Copyright © 2013 by Dan Fagin. Reprinted by arrangement with Bantam Books, an imprint of The Random House Publishing Group, a division of Random House, Inc. All rights reserved.


By Dan Fagin
