A genetic "Minority Report": How corporate DNA testing could put us at risk

23andMe's FDA problems are just the beginning -- the company's DNA tests open up a wealth of privacy concerns

Published January 26, 2014 1:30PM (EST)

(cosmin4000 via iStock/Salon)

On Nov. 22, 2013, the FDA sent a now-infamous letter to the genetic research startup 23andMe, ordering the company to stop marketing some of its personal DNA testing kits. In its letter, the agency said it was concerned about the possibility of erroneous test results – false positives and false negatives. The FDA warned that false positives – for example, being told that one has a high risk of breast cancer when really one doesn’t – might lead customers to seek expensive testing or medical treatment that they don’t really need. False negatives – just the opposite – might lead customers to ignore serious health problems or deviate from a prescribed treatment regimen. The company had been out of contact with the FDA since May of 2013, and had not filed the required information to allay the agency’s concerns.

When word about the letter got out, the ever-ready machine of Internet journalism whirred quickly to life. Defenders of the genetic research firm argued that the information, if used properly and with a physician’s supervision, is not only a wondrous tool for protecting health and prolonging life, but a fascinating look into the mysteries of one’s genetic code. Besides, they continued, the technology is here to stay; fighting it will be like lashing the sea to stop the tide. Critics, however, shared the FDA’s worries that the company’s products might drive a frenzy of self-diagnosis and hypochondria. In their view, putting unfiltered diagnostic information in an anxiety-prone person’s hands could be dangerous – better to leave it to the trained judgment of a licensed doctor. Neurologists have reported, for example, that otherwise healthy twenty-somethings, upon getting back their 23andMe results, have marched into their clinics demanding MRIs to check for signs of Alzheimer’s.

But how does 23andMe's process actually work? Upon receiving a saliva sample from a customer, 23andMe draws statistical inferences about that person’s likelihood of having certain diseases based on what are called “single nucleotide polymorphisms” or “SNPs” (pronounced “snips”). A SNP is a genetic variation at a single, specific location along a person’s DNA strand. Rather than the base pair that the vast majority of the population has at a given location, an individual with a SNP has a different base pair, which can lead to anomalies in the production of proteins (the chains of amino acids that do much of the body’s work). When 23andMe finds certain SNPs in someone’s DNA, it cross-references its vast databases of medical literature, which contain findings from thousands of studies showing correlations between certain SNPs and certain diseases. To guard against misinforming its customers, 23andMe maintains precise standards about which studies make it into its databases. The political fight over the company is thus drawing the troops into a swamp – into the muddy marsh of arguing over whether consumers are intelligent enough to process this sort of sophisticated statistical and epidemiological information on their own. The question quickly becomes not about 23andMe’s little kits, but about government paternalism more generally.
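The cross-referencing step described above amounts, at its core, to a lookup: match a customer's genotype at each known SNP location against published study findings. The following sketch illustrates the general idea only – the database structure, SNP IDs, and odds ratios here are illustrative placeholders, not 23andMe's actual data or pipeline.

```python
# A minimal sketch of SNP-based risk reporting (illustrative only; not
# 23andMe's actual methodology). SNPs are identified by "rs" IDs; the toy
# "literature database" maps an rs ID and genotype to a reported association.

# rsID -> {genotype: (associated condition, reported odds ratio)}
# The entries and numbers below are hypothetical placeholders.
STUDY_DB = {
    "rs0000001": {"CC": ("condition A", 3.2)},
    "rs0000002": {"TT": ("condition B", 1.9)},
}

def risk_report(genotypes):
    """Cross-reference a customer's genotypes against the study database,
    returning every (rsID, condition, odds ratio) match found."""
    findings = []
    for rsid, genotype in genotypes.items():
        hit = STUDY_DB.get(rsid, {}).get(genotype)
        if hit is not None:
            condition, odds_ratio = hit
            findings.append((rsid, condition, odds_ratio))
    return findings

# A customer whose genotype matches one study entry but not the other.
customer = {"rs0000001": "CC", "rs0000002": "CT"}
print(risk_report(customer))  # only rs0000001 matches
```

The real system is vastly more complex – combining multiple SNPs per condition, weighting study quality, and reporting population-relative risk rather than raw odds ratios – but the basic shape is this correlational lookup.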

Legally speaking, however, the entire argument is moot. There is no need to wade into the muck of one’s feelings about big-government paternalism or the perils of new technologies – at least not right now. And that is for a simple reason: The FDA is right. According to the law, 23andMe’s kit is a “medical device,” which federal law defines as any product useful in diagnosis or treatment, which means that the kit is subject to the FDA’s regulations about the marketing of medical products. Those regulations require 23andMe to provide the FDA with information about its statistical methods, to conduct studies of its products’ effects, and to address the agency’s concerns about the risks of erroneous test results. It hasn’t done so. Thus, in the real world, 23andMe will have to comply with the FDA’s guidelines, and it will have to find some way of reassuring the government that its brightly colored little kit is not actually Pandora’s box. From a legal perspective, that is the end of the discussion.

Which is not, of course, to say that it should end the discussion. There is another aspect of 23andMe’s business, one which has received less attention from the media (with the exception of an excellent writeup in Scientific American by Charles Seife), but which is, in actuality, both equally troubling and equally fascinating. The company also houses a sizable research wing. 23andMe intends to aggregate the genetic information it receives and correlate that information with self-supplied data from customers about their biological traits – for instance, whether they clasp their hands with left thumb over right or vice versa; whether black coffee tastes bitter to them; or, more seriously, whether they have Parkinson’s disease. According to its website, the company hopes to use this information to find new ways to predict the incidence of disease in the population.

If the project is successful, 23andMe’s roof would shelter a veritable warehouse of genetic information, mapping correlations between phenotypes, SNPs and diseases across a huge swath of the population (albeit not a particularly diverse swath, given the company’s likely market demographics). To date, 23andMe says that it has received saliva samples from approximately 500,000 people – a sample many times larger than those used in most large-scale genetic studies – and the company has only existed since 2006. Once it clears its name with the FDA, the amount of genetic information available to the firm’s research arm will only grow. “The long game here is not to make money selling kits, although the kits are essential to get the base-level data,” said Patrick Chung, a 23andMe board member. And the company’s privacy policy, it’s worth noting, makes no promises that it will not share aggregate-level genetic data with its vendors and affiliates, such as the company that manufactures the chips used in processing saliva samples.

In a dashed-off writeup, Ezra Klein blithely opined that the company’s research wing is simply “an experiment in big-data genetics.” Citing Harvard political science professor Daniel Carpenter, who recently published a lengthy history of the FDA, Klein argued that new technologies like 23andMe's might provide a salubrious occasion to reevaluate how the law deals with cutting-edge bioinformatics. (How exactly the rules should change, though, Klein shrewdly declined to say.) There might be something to the point, but saying that we might possibly want to eventually give some consideration to maybe letting the political process (in consultation with appropriate experts, of course) arrive at some new rules about how such products might or might not be regulated is, ultimately, not to say much at all. It is almost as if Klein sees that there is a steep cliff off which he might take a sharp plunge into a sea of difficult and frightening questions, and decides instead simply to gesture offhandedly in its general direction.

But the only way forward is downward. Whatever happens to 23andMe, the idea of aggregating and cross-referencing huge amounts of genetic information is now a cultural reality. A private corporation, backed by some of the largest tech companies in the world (including Google), is planning to do genetic research through statistical modeling on an unprecedented scale. The implications of such a venture are profound, but the national conversation about the company has focused almost exclusively on what consumers will do with the information. This is not to say that it is not perfectly reasonable to have such a conversation. In fact, it is even reasonable to slosh around in the muddy waters of the swamp – the battle over paternalism and the authority of the medical establishment. But if we dare not ask, for example, whether an advanced, large-scale statistical model of human genetics might not be used to make predictions about, say, criminal behavior, or academic success, or lifetime income, who will? Let us dare to dive into the sea.

To be clear, at present, 23andMe’s databases are used for properly epidemiological purposes. And the company seems to be quite serious about protecting the privacy of customer data and fostering trust. Identifying information, for example, is encrypted and stored separately from genetic material. I am not, that is, accusing them of wrongdoing – of conspiring to create a genetic dystopia like the ones envisioned in films like “Gattaca” and “Minority Report.” Rather, I am suggesting that the idea of a massive genetic database holds all the ominous potential, if not used with extreme circumspection, to lead exactly there.

For instance, there is no reason, in principle, why the information available to 23andMe could not be used to make predictions about future crimes. With the information in 23andMe’s hands, it would be possible, for example, to see whether you have the so-called Warrior Gene, the presence of which, a famous study found, correlates with a high likelihood of violent behavior, and then send you a “How likely are you to become a murderer?” report. There is also no reason why the company’s statistical techniques could not be used to make predictions about intelligence. For example, the journal Nature reported last May that the Beijing Genomics Institute intends to study 1,600 mathematically and verbally gifted children (as measured by IQ) in a search for SNPs that might explain their extraordinary intelligence. If studies like this disclose significant results, and if their methods meet 23andMe’s guidelines for scientific integrity, the company could begin marketing an all-new kit called “Will your baby become a super genius?”

From there, it is not hard to imagine going even further. In 2013, in Maryland v. King, the Supreme Court held, in a 5-4 ruling, that it was constitutional for the police to collect a sample of a person’s DNA when he or she is arrested for – not even formally charged with – a crime. In a decision that none other than Justice Scalia criticized, in a fiery dissent, as authorizing the creation of a “genetic Panopticon,” the Court concluded that it was constitutional for the police to compare these DNA samples with national databases of DNA evidence found at the scenes of unsolved crimes. What is to stop the police from going further – from working together with bioinformatics companies like 23andMe to build a crime-prediction and crime-detection mega-model? Although 23andMe’s information is currently stored in a form that makes it difficult for law enforcement to access, it would be possible in principle to cross-reference all three sources of information – bioinformatics customers, criminal arrestees and crime-scene evidence. With such a tool, the police could both find criminals foolish enough to want to know about their risks of disease and, more frighteningly, find SNPs that are correlated with getting arrested or committing crimes. Are we supposed to take comfort in the company’s own privacy policy, which explicitly provides that it will surrender individual-level genetic material if required to do so by law? “Minority Report” indeed.

To many readers, worrying about these uses – and about whether they will lead to some sort of “Gattaca”-like dystopia – might sound like hyperbole, if not the ravings of a paranoid lunatic. After all, 23andMe has announced no plans to create such products. Asked for comment about these uses, 23andMe’s Catherine Afarian quite reasonably told me that they are, at this point, “more science fiction than science fact.” Furthermore, in 2008, Congress passed, with enormous bipartisan support and the signature of President George W. Bush, the Genetic Information Nondiscrimination Act or “GINA.” GINA prohibits the use of information about a person’s genetic predispositions in pricing health insurance or making employment decisions, such as hiring and promotion. So what, one might reasonably ask, is the problem? Isn’t the only remaining question, indeed, the one about whether we think people are smart enough to process this sort of genetic information on their own?

But the point, again, is not only about this particular company, but about the broader trend toward making predictions about what will happen to someone based on the mutations in his or her genes. 23andMe is just the beginning; its kit is merely the prototype for a kind of bioinformatics product that companies will package and market to us in the years to come. It has, in fact, just proved that we are eager to buy. And while the passage of GINA undoubtedly represents a reassuring and admirable step toward ensuring that discoveries about our genetic predispositions are not used against us (though even then only in specific contexts), it cannot stop the momentum of our broader culture. Curiosity is a rushing river. If people want to know, the law cannot stop them from finding out. The only law that reigns when you click the button and send 23andMe your $99, when you follow the instructions and gather your saliva, when you then raptly read – with intense and morbid curiosity – the report that the company sends back, is the law of the market. At the end of the day, there is only you and the choices you make.

What, then, are we supposed to do? There is at least one serious response available. We might, that is, not give up on the idea that we can reverse the tide. We have the power, if we choose to exercise it, to pass laws that would serve as significant bulwarks against the more insidious uses to which genetic data could be put. For example, we might still make laws to prohibit the police practices that the Supreme Court said were constitutional in Maryland v. King. Or we might pass a broader, more robust genetic non-discrimination act, one that would bar the use of genetic information in education, in police profiling, in commerce and countless other arenas in which we otherwise enjoy civil rights. Or we might even pass a genetic privacy law, one that would outlaw certain data-sharing and retention practices and impose heavy penalties for violations. (This last option seems particularly far-fetched in a world where we apparently tolerate widespread spying by our own government.)

But I doubt that anything like this will happen. As it stands today, even if the FDA – the agency we all, in practice, must rely on to ensure the safety and integrity of these kinds of products – makes some new rules, 23andMe’s kits, adorned with new warnings and restrictions, can return to market. And even if you, personally, decide not to participate in 23andMe’s process, there is nothing you can do to stop others from placing an order for a kit. It appears that all we can do is hope. Maybe, if they are scrupulous, 23andMe will do nothing more than make contributions to epidemiology and to our understanding of our biological makeup. After all, there are millions of people in the world suffering with serious diseases, many of which likely have genetic links. Surely we should not wish to stop the gathering of data that could lead to major scientific breakthroughs?

Indeed not. It is, however, the good name of science that is treated most roughly in the national conversation about 23andMe. Many working scientists (ironically including some at 23andMe) maintain a ferocious skepticism about the power of correlational genetic studies to reveal anything meaningful about the phenomena that they purport to measure. Asked for comment about the study of super-intelligent youngsters mentioned above, geneticist Peter Visscher from the University of Queensland, Brisbane, told Nature: “Even for human height, where you have samples of hundreds of thousands, the prediction you’d get for a newborn person isn’t very accurate.” Likewise, in response to evidence about the so-called Warrior Gene, Harvard neuroscientist Joshua Buckholtz wrote in NOVA that, in his view, “any test for a single genetic marker will likely be meaningless for either explaining or predicting [individual] human behavior.” And in general, the most skeptical readers of the findings from scientific studies tend to be scientists themselves.

All of which serves as a reminder that 23andMe is, in the final analysis, a marketer of data, a computer-science firm and a builder of complex correlational models. If we are convinced by what it tells us, it is only because we have implicitly accepted its premises – that our own behavior owes its origins to our biology, and that if we know everything about our genes, the events of our lives follow as night the day. Our fates are sealed on the day we are born. But these notions conflate the external markers and tools of science – the statistical methods known as “causal inference” – with scientific knowledge itself. Human understanding is broader and deeper than can be contained in any formal system built within it. On the day when all these companies unite, bringing together all their predictive data into one über-model, will we kneel around the humming servers, in awe that we have actually built Laplace’s demon? Will we then ask them how we should live our lives?

By Benjamin Winterhalter

Benjamin Winterhalter is a writer and editor whose work has also appeared in The Atlantic, The Boston Globe, the LA Review of Books, and elsewhere. On Twitter @BAWinterhalter.

