Are we playing dice with the biosphere?

Veteran tech writer Denise Caruso warns us how little we really know about genetic engineering -- and says there's a smarter way to place bets on new technology.

Published March 12, 2007 1:00PM (EDT)

Donald Rumsfeld once befuddled a press conference with a brief lecture on epistemology, explaining the difference between "known knowns," "known unknowns," and "unknown unknowns -- the ones we don't know we don't know." The remark earned the former secretary of defense a "Foot in Mouth" prize. But the distinction has its uses.

"Intervention," Denise Caruso's new book about biotechnology, is all about unknown unknowns. It's a stark survey of how little we know about the risks of genetic engineering -- and, further, of how ignorant we are about how little we know.

"Intervention: Confronting the Real Risks of Genetic Engineering and Life on a Biotech Planet" offers a profoundly persuasive and endlessly disquieting portrait of the risks our species is blindly taking with biotechnology. Could the introduction of genetically modified products into our environment be responsible for seemingly disconnected problems -- like, say, the strange disappearance of much of the honeybee population? Is meat from cloned animals really as safe as the Food and Drug Administration maintains?

Caruso doesn't claim to have answers; rather, she credibly argues that we should distrust anybody (including, or especially, regulators) making such a claim. And we ought to face that uncertainty head-on, not deny it until some irreversible catastrophe forces our eyes open.

"We are actually a giant biology experiment, this planet, right now," Caruso says. "There's no control."

Caruso, a veteran technology writer (full disclosure: I've known her as a media colleague for years) who now runs the Hybrid Vigor Institute, is no know-nothing, nor does her knee jerk. Her quarrel with the processes that have swept transgenic foods and products into our farm fields and onto our dinner tables rests not on anecdotes or emotional appeals but rather on solid scientific research.

"Intervention's" scrupulous refusal to sensationalize only makes the alarms it raises more harrowing. But the book also offer readers a lifeline. Drawing on the work of the National Academies (like the 1994 "Understanding Risk" report), "Intervention" outlines a better approach to assessing the risks of new technologies. Under this "analytical deliberative process," interested experts from different fields assess risks from a broad perspective, asking questions rather than deliberately ignoring blank spots in our knowledge for want of data.

These groups can cast a more probing light on the choices our government and industry are currently making in the dark. In essence, they aim to drag "unknown unknowns" into the light. So does "Intervention."

I talked with Caruso recently at Salon's San Francisco office.

You spent years writing about the technology industry. How did this book come about?

It was sheerly out of reaction to meeting [molecular biologist] Roger Brent. He laughs when I say this, and I say it with all the love in my heart, but he's one of the most macho scientists I've ever met in my life. His lineage -- in academics, that means who your Ph.D. advisor was -- is a guy named Mark Ptashne, whose Ph.D. advisor was James Watson. When I met Roger, his attitude was: What's a nice girl like you doing being afraid of eating genetically modified food? Don't you know that you could eat 10 kilos of Bt potatoes [Bacillus thuringiensis is used to modify crops transgenically for insect resistance], and nothing would happen to you?

I didn't know that much about biology. But when he said that, I said, "I don't think you actually know that to be true. I don't know how you could know that to be true." And we went back and forth on it, and he finally conceded -- which I was really surprised about. He said, "So how do we protect the public, but not stop science from progressing at the same time?"

And I thought: We have to redefine risk. That was the original, boring title of the book -- "Redefining Risk." For me, we couldn't even have a conversation about this unless we had the same definition of risk. So I started on this subject as a way to prove my point.

My first reaction to "Intervention" was, it's a book about how much we don't know.

When I started Hybrid Vigor, one of the first things I got invited to was a Sloan Foundation meeting at Columbia. They do one every so many years, called "The Known, the Unknown and the Unknowable." It's really interdisciplinary: They bring together 20 or 25 researchers from a bunch of different fields. And they all presented what in their field is known, what is unknown and what so far is unknowable.

I think that this is one of the most important ideas to keep in mind when you're assessing the risks of innovations in particular. Because, by definition, almost everything is unknown about them. It's new -- ergo, no history. You can't necessarily apply the same models.

With biotech products, the regulatory attitude seems to be "innocent until proven guilty." Unless someone has somehow been able to accumulate enough data to prove some kind of harm, the assumption is that everything is just fine. Is that a fair generalization?

It's actually a little worse than that. The way it works is that the government and industry decide together what the risk model is going to be for something new. And then they gather evidence that fits within that risk model. And that's the only conversation you get to have, unless there's some overwhelmingly obvious piece of evidence -- as happened, for example, with transgenic bentgrass, where the USDA just got its hand slapped for being so cavalier about a noxious weed it managed to let spread.

So we don't get to come into the conversation. And when I say "we," I don't even mean the public -- I mean other experts from other fields, people who have relevant technical information. They don't even get to be a part of the conversation until the decision has already been made. Two or three things like that have happened in the last month or so, and they drive me nuts -- like the cloned meat and milk thing. The FDA decides, based on this very narrow bit of data that they've got about nutritional equivalence, that cloned meat and milk are safe. And then they put it out for public comment for four weeks. They've been considering this for five years! And people who have an issue with it have a month.

You're very careful never to go beyond the evidence, but you're basically saying that our current scientific establishment's way of handling this huge field, biotech, has been a disaster.

Yes. I think if I could have qualified it every time I wrote it, I would have limited it to a subset, even, of molecular biology. There's a very macho group of people who really believe that this is it. This is the way it is. Full steam ahead.

The other fields have been locked out of the decision-making process?

I think most of them have given up. Unless you're doing molecular biology, genomics, nowadays, it's really hard to get funding. So if you're an organismal biologist, and you study butterflies or you study cows or you study anything that's a whole organism operating in the universe, you can't get money.

Is that because of the investment boom in biotech?

It's not that simple -- that would be more heinous, and I could be more condemning of it. The lines get very blurry. The truth is that, always in science, from the beginning, from the Medicis, you gotta get the money to do the work, right? So we had a movement in the culture, 30 or so years ago, toward finding ways to monetize taxpayer-funded research. The idea was sound: Get this good stuff out of the universities and into the society.

They called it "technology transfer" when I was in college in the '70s and early '80s. The concern was that we'd end up undermining the entire public basis for scientific progress. Once you start locking up research, you're in trouble.

It started out as an honest move by the scientific community toward where they could get the money to support their work. But now you see this very tight, laser-beam focus on molecular genetics. How molecular genetics interacts with the environment in the rest of the world is obviously really important, but it gets hardly any attention at all.

In the book you talk about the drawbacks of agricultural monocultures. This sounds almost like an intellectual monoculture.

Absolutely, a monoculture in science. And it's really disheartening for the people who are good scientists who are not in that mode. It's tragic. One of the people I talked to, Kim Waddell, who was the study director for the National Academies, said we're going to wake up one day, and we're going to find that all of this amazing knowledge that these people had about how organisms operate and interact with each other in the universe, it's going to be gone. And we'll have to relearn it again.

That's a shame, and from the perspective of risk, it hurts -- it keeps us from being able to do effective risk assessments of these things, early on in the process. Because these people know what happens in the world.

One of the parallels you bring in is the saga of the nuclear industry -- as an example of a technological innovation where we failed to assess risk properly in the past. The establishment perspective was, there were no problems there; then we had some disasters and the public viewed it as a big problem. Today, though, nuclear power is once again being touted by people like the Wall Street Journal editorial page --

Or Peter Schwartz!

-- as a good option we should be reconsidering. They're arguing that the media and the public overreacted to a handful of incidents and squashed a whole industry.

That's absolutely the argument. Here's where the argument is ridiculous: It's right -- it was an overreaction. It was an overreaction because we were oversold that stuff to start off with. You tell me that nuclear energy is 100 percent safe and too cheap to meter, and you beat on this in every possible conceivable venue you can, and tell people who disagree with you that they're stupid Luddite jerks. It's the same thing in biotech, right? You're an idiot if you disagree. The people who were doing research on anti-nuclear stuff, they had really fringe-y people, too. But they also had people who were making legitimate cases about stuff like, how do you dispose of fuel?

The point for me is, if you want to bring a risky technology into the culture, tell the truth about it to start off with! This is the other main point in the book. Human beings, ordinary human beings like me and you, people who read Salon: You sit us down, and you explain to us what's happening. We have a problem, an energy problem. None of the solutions are perfect. Some are riskier than others. Here are our alternatives. We need to make a decision as a society which one we're going to do. Let's open up this process and make it transparent. Then you have what is truly risk communication, as opposed to propaganda.

These people keep telling us, "no science-based evidence..." Whose science? Whose evidence? I spent three years finding evidence on the other side of this coin that's absolutely peer-reviewed, valid, hard-core research. So it frustrates me.

The same thing happened with DDT, right? DDT is fabulous. DDT ended up being an enormous destructive force in the ecosystem because we decided to use it on everything -- just like we did with antibiotics. And we changed the ecosystem, in some cases irrevocably, and now people are saying, we'd like to bring DDT back. Maybe there are still some mosquitoes somewhere that are susceptible to it. But basically, they're resistant now.

The opportunity to use it effectively is gone.

I feel the same way about genetically modified anything. I don't know whether we have a problem that requires that solution yet. We have a technology. And we have a technology that people want to do something with. And that some people think will be very valuable for human health.

It's eye-opening in "Intervention" when you review how essentially unprofitable, how ineffective and how really just messy the whole world of biotech has been so far. The field's stereotype is like the picture on the cover of your book: these clean glowing vials of magical elixirs, and tiny little pure bits of gene code getting snipped into other codes, and presto! You get something wonderful.

It is untrue.

Instead, you describe a process with all sorts of surprising baggage, like the "cassettes": the other stuff that gets injected from one organism into another in order to insert the gene you're trying to add. I read that and thought, I'm a journalist and publicly aware person -- I should have known that!

Nobody knows this stuff. I had to figure it all out, too. I was blown away. Compare that to what's on the biotechnology industry's Web site about what biotech is: You take a gene out, and you put a gene in. And the same on the FDA's Web site. They misrepresent the technology to the public, so that we have this false sense of simplicity. And if it's really that simple, it must be safe.

At the end of the book, you introduce the idea of collaborative risk assessment, or "the analytic deliberative process," and describe one example you and Baruch Fischhoff conducted -- a model process assessing the risk of human transplants of organs from genetically modified pigs. Is what you did identical to what's described in the National Academies' "Understanding Risk" report?

We took it a step further. One of the things we realized early in the work on genetic engineering in particular was that we were often talking about events with a very low probability of occurrence -- but if they did occur, the consequences would be extreme. And generally they were things that had never been seen before. Like, there's never before in nature been a time when a living pig cell rubbed up against a living human cell and had the opportunity to exchange retroviruses. The pig valves that people hear about getting inserted today -- that's dead tissue; those are not live cells.

So you have this never-before-seen situation. For traditional, quantitative risk analysis, there's literally no data. You can try to extrapolate from something that's happened before, with greater or lesser degrees of success; you won't know till it's too late. But what we found by talking to a bunch of experts outside of our fields was that scenarios were a good way to think about this stuff.

Now, traditional risk analysts hear the word "scenario" and they say, "You're making up stories!" And a scenario person will look at somebody who's trying to do a quantitative risk analysis on something that's never happened before and say, "You're making up numbers!"

So we tried to figure out a way, and I think we did it, where you could come up with what Baruch called a "computable model." You use scenarios to help you draw a map of the known, the unknown and the unknowable. That gives you a picture of the risk. And as time goes on and you get more data, you can compute more pieces of this model and you can start to come up with actual probability calculations. That's what we did that was different -- we went into the scenario stage. There are lots of ways to do analytic deliberative processes where you don't have to go to those lengths. But the process itself exists. It's been used in lots of very tricky technological situations, and it's very effective.
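
To make that "computable model" concrete, here is a minimal sketch, in Python, of how such a model might be structured -- the scenario names, probability intervals, update rule and threshold below are hypothetical illustrations of the idea, not Caruso and Fischhoff's actual model:

    # A toy "computable model": each scenario carries an epistemic status and
    # a probability *interval* rather than a point estimate. As evidence
    # arrives, intervals narrow and more of the model becomes computable.
    from dataclasses import dataclass

    @dataclass
    class Scenario:
        name: str
        status: str               # "known", "unknown" or "unknowable"
        p_low: float = 0.0        # lower bound on probability of occurrence
        p_high: float = 1.0       # upper bound (total ignorance = [0, 1])
        consequence: float = 0.0  # severity score if the scenario occurs

        def update(self, low: float, high: float) -> None:
            """Narrow the probability interval as new data arrives."""
            self.p_low = max(self.p_low, low)
            self.p_high = min(self.p_high, high)
            if self.p_high - self.p_low < 0.1:  # arbitrary "known" threshold
                self.status = "known"

    def risk_bounds(scenarios):
        """Bounds on expected consequence; a wide spread flags our ignorance."""
        low = sum(s.p_low * s.consequence for s in scenarios)
        high = sum(s.p_high * s.consequence for s in scenarios)
        return low, high

    # Hypothetical scenarios for a xenotransplantation assessment:
    model = [
        Scenario("retrovirus crosses into host", "unknown", consequence=100.0),
        Scenario("acute organ rejection", "known", 0.2, 0.4, consequence=10.0),
    ]
    print(risk_bounds(model))   # (2.0, 104.0) -- most of the risk is unknown
    model[0].update(0.0, 0.05)  # a new study narrows the retrovirus interval
    print(risk_bounds(model))   # (2.0, 9.0) -- the map is filling in

The interval representation is the point: what we don't know stays visible in the output, instead of being collapsed into a single, falsely precise number.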

So what you're doing is pulling together a much broader set of perspectives.

For a traditional risk assessment, somebody would be assigned to do this job, and they would figure out what the model was, and they would go ask whatever experts they thought were important, but it would really be driven by one person -- one paradigm, if you will. In this model you blow the walls off of that. You come into a room; you sit a bunch of people around the table from a bunch of different areas of expertise. The catchphrase is "interested and affected parties" -- experts who would have some relevance to the topic, stakeholders. And you open up the question -- you say, This is the problem on the table. It's purely about making a decision: Should we move forward with this or should we not?

Just by starting there, that's different from saying, we have this new product, can it be approved?

I've been puzzling and trying not to be too cynical for three and a half years now about why this doesn't happen more, when it's so obviously the most effective way to do risk assessment for these kinds of problems. It takes a lot of time and it costs money, and you don't poop a ratio out the other end of it where you can say, OK, we've decided, this is what the risk is. It's much more ambiguous. It's very uncomfortable. People do not like to be uncomfortable about this kind of knowledge.

We need to train our culture to understand that there are things we don't know. And we need to build learning into the process of doing these assessments, so that you don't have to undo a regulation when you find out that something has turned out to be risky. You need a way to accept input while you're moving along. The [analytic deliberative] process makes risk assessment very -- porous.

You also make a strong case that it leads to better science. You sit people down to talk about the transgenic pig organs, and you get questions like, What do you do with the pig carcasses?

Even before "Understanding Risk," there was another National Academies report before that called "Science and Judgment in Risk Assessment," and they really raked the EPA over the coals. They said, You have got to stop moving away from uncertainty! You guys won't shine the light in the dark places. You have got to look at what you don't know.

It's very difficult to train people to go into the places where it's dark. But people like us, who have to eat and drink and breathe these products -- I'm happy to sail into those dark places. Let's talk about them!

We're already in them.

Exactly.

What's to prevent the analytic deliberative process from becoming a sort of interminable town meeting where people just spout their arguments rather than listening?

But these aren't hearings -- they're working meetings. Imagine: We're sitting in a conference room at Salon now. There's a big table here, 12 or 13 chairs around the table. If we were doing a risk assessment, there would probably be a whiteboard at the front of this room, and an analyst or two or three who had been asked by the very people at this table, What about this piece of science -- do we know this? If we understood that piece of science, that would help us understand whether or not this was risky, correct?

So then you would spend a meeting coming up with those questions. The analysts would go away, they would find out as many answers as they could, they would come back with the best answers that they had, find out new questions, see if there was some way they could fund the research or go find the information somewhere else.

It's an inquiry.

It really is an assessment, not a hearing. There's a facilitator, but everybody who's at the table is an equal participant. Everybody has a voice. My feeling is, if you really want to get anything done, you have to lop off both ends of the curve. You can't have the person who invented the technology at the table, and you can't have the radical anti-biotech person. I'm interested in real conversation and finding out what is the benefit that we will get from whatever this is: Is it enough, is it sufficient, that we are willing to take the risk? The risk as we all perceive it to be -- not as based on industry data submitted to a regulatory agency for a regulation that the industry wrote in the first place! That's not OK with me.

You describe what the biotech industry calls "biological confinement" -- using genetic engineering itself to try to prevent transgenic organisms from spreading. This became necessary because actual physical-world confinement of this stuff turned out to be impossible.

Isn't that amazing? Let's invent new technology to confine the stuff that we invented that we couldn't control in the first place.

I couldn't help thinking of the software industry's similar record of introducing new "solutions" each year intended to fix the problems the same industry introduced last year.

My feeling is that the human race is smarter than that. We know how to do this. This doesn't have to be the full employment act for the scientific community, so that they're forever fixing the things that they broke and then inventing new things that will break that they have to fix. We can do this right.

I think the only way that I was able to keep going in writing this book was to realize that we actually have the solution. It's powerful. And it's very democratic -- it's not at all management by committee, which is the other thing people get very afraid of about this. You have to make sure you don't get trapped in the committee mentality. But I would take that over the risk of leaving most of the uncertainty out of the equation, just for the simplicity of being able to make an equation out of it.

Any reactions to the book that took you by surprise?

It's still pretty early. But it's surprising to me that the only really serious criticism of the book I've gotten so far has been from someone on the anti-biotech side.

Because you're not saying it's bad?

I'm not saying it's bad. I'm saying, we don't know. And it behooves us to try to find out.

You started out with a book deal from a major publisher for "Intervention." Then the project took longer than planned.

I was supposed to finish it in a year. My first book -- what was I thinking? It was insane. I had to teach myself everything. To write a book for the general public, it's a fallacy to think that you have to know less -- you have to know much more. I'd managed to get through college without taking a statistics class, which is what quantitative risk assessment is based on. I had to teach myself statistics and probability. I had to teach myself biology. I had to teach myself genetics. I had to go as deeply as I could into all of this stuff. And it took a long time.

You ended up publishing the book yourself, via Lulu.com. How did that come about?

I never talked to the editor about it directly, but when my agent called, she told me that he wanted to cancel the contract. What he had wanted was more sensational -- kind of a cross between "Silent Spring" and Michael Crichton, I guess. Which I thought was odd, because I've heard from lots of people that the evidence in "Intervention" really is that level of scary.

Anyway, my agent was the one who finally pushed me over the edge to publish it myself. I had friends who were pushing the idea, but I was worried she wouldn't represent me if I did, since there's no sure money in it. But when I asked her point-blank how she felt about it, she said, "Denise, you would never be willing to do what it would take to make this book what he wants, and you shouldn't. You'd lose credibility with your sources. You'd lose your self-respect. The book needs to be published. I support you. Go do it."

So I did. And I feel incredibly grateful that I'm alive at a moment in time when we have such tremendous tools available to us -- when somebody like me who's not afraid of technology can take a deep breath and listen to my friends' wise counsel, and just go ahead and publish it myself.

It can be done.

It can be done beautifully.


By Scott Rosenberg

Salon co-founder Scott Rosenberg is director of MediaBugs.org. He is the author of "Say Everything" and "Dreaming in Code" and blogs at Wordyard.com.
