Any author striking out in a new direction following one or more successes in a given genre can expect to face the question, “What made you decide to try this?” And having elected, after writing two historical novels that were received by the public with the kind of enthusiasm that is, to say the least, humbling, to turn my sights on speculative fiction — specifically, on a novel set a quarter of a century in the future — I expected to hear that question. What I have been unprepared for is the underlying anger that has sometimes tinged it. It has not so much been a case of “Why did you do this?” as “What the hell is your problem?” After much thought, I believe I may understand where this feeling is coming from.
“Killing Time,” my latest book, has been repeatedly described as “dystopian,” a word I confess I don’t quite understand; but it cannot be denied that the novel paints a rather harsh picture of the immediate future, and specifically of the nature and possible long-term effects of the triumph of information technology. It is my belief, for which I offer no apology, that most of that technology is making people dumber: It is teaching them how to assemble massive amounts of information, of arcane minutiae, without simultaneously teaching them how to assemble those bits of information into integrated bodies of knowledge — such integration being the only function that distinguishes the human brain from a mechanical computer.
And lest anyone waste a free moment worrying about this dilemma, information technology bombards us so constantly with entertainment and marketing that quiet, objective consideration of our fate often becomes impossible. This leads to a society in which each member is increasingly concerned with the satisfaction of his or her own material appetites, and less and less concerned with the philosophical problems and principles that underlie the successful creation and maintenance of a civil society. The result? A downward spiral into a very uncivil state, one in which the public interest takes a distinct back seat to public diversion.
In “Killing Time” I go on to say that if this condition goes unchecked for an additional 20 or even 10 years, what we will see is the triumph of corporate interests, the deterioration of educational, environmental and public health programs, and increased violence in those parts of our country and our world that are left behind in the information-generated scramble for wealth and material satisfaction. What can possibly check such developments? Only one thing, I believe: vigorous but enlightened government regulation, the same kind of regulation that Theodore Roosevelt initiated 100 years ago to check the spiral of the United States into a nation where the rich were served by a laboring class that had no right or reason to expect reasonable working hours or standards, decent food, drugs or housing, or even remotely honest politicians.
If you have made it this far in my apologia, chances are you are already groaning. What? you may well be saying. Information technology is bad? And government regulation is the answer? But information technology has generated so much wealth, from the stock market to porn peddlers to dot-coms! And government regulation — I thought we were trying to get less of that in our lives. Why are you, Mr. Carr, trying to rain on a parade that has made so much money for so many people and entertained such vast audiences by bringing in the tired old horse known as government regulation?
What the hell is your problem, anyway?
My problem, briefly put, is this: Believing what I do about the nature of information assimilation, on an individual as well as a mass scale — believing, as I state in the book, that “information is not knowledge” — and watching an entire generation of young people grow up to become virtual machines capable of storing informational bits like biocomputers but not of assembling those bits into meaningful bodies of knowledge, I thought long and hard about ways to make information technology more socially beneficial. But in the end, I could not escape one central dilemma: Only two forms of regulation are available in the United States: governmental and corporate.
Government regulation is admittedly imperfect and often infuriating; but it must at least try to work toward the public good, or its authors will lose their power. Corporate regulation, on the other hand, knows only one purpose: profit. A hundred years ago we could not rely on the new corporations that were making the country fantastically wealthy to regulate themselves, to limit their workers’ hours, to protect children or to guarantee the quality of their products. Similarly we cannot today expect the masters of information technology to behave in a socially responsible manner if that stands in the way of their bottom line. Put simply and a bit crudely, the operators of the Internet can never be expected to agree to regulations that might obstruct the online activities of child molesters if such regulations would make it difficult for those same companies to reach the children that form such a large part of their customer bases. It simply will not happen.
What we need are content regulations roughly similar to those governing obscenity and the exploitation of children in other media; and there should be individual-use regulations, perhaps even licensing requirements, for visiting some sites, such as the plethora of supposed “special interest” sites that thinly veil pedophiliac content. Convicted sex criminals and other questionable characters simply should not be allowed access to sites that make it easier for them to stalk and approach their prey — not until they have demonstrated, through a licensing procedure, some sort of rehabilitation, and maybe not even then. This can be a private process, unlike proclaiming to whole neighborhoods that a sex offender has moved in (a law that, while arguably effective, raises troubling constitutional issues). Indeed, there are many areas of the Net for which application and licensing should perhaps be required. Pornography is too rampant and too available to any kid who can “borrow” a credit card or simply surf the Web, as are gambling sites, sketchy chat rooms and the like. Licensing and password requirements could help curb a lot of this.
So far so good: We are talking about material that is obviously dangerous and has been criminalized in other arenas. But what about material that is just as dangerous, I think, but is much more difficult to regulate — which is to say disinformation, as well as information that is forbidden in other media but sneaks onto the Net? How do we tackle the problem of people knowingly posting falsehoods on Web sites, or releasing potentially dangerous material that is regulated in other forms but, since we lack an Internet watchdog agency, can find its way onto the Net without obstruction?
Our central problem here is that the Net has so far been regulated in much the same way as print has been. This is a mistake. Television and radio are governed by the Federal Communications Commission, ostensibly because airwave space is limited and is viewed as a public resource. But anyone who has ever worked in either medium knows that the actual concerns of the FCC extend beyond this minimal mandate and into the realm of responsibility — that is, into ensuring that networks do not use their considerable influence to perpetrate what could be massive frauds and deceptions of the public and do not behave in a way that is wantonly at variance with obscenity and other content laws. Why, then, are the airwaves held to so much more stringent a set of standards than those that govern the print medium, which are really nothing more than basic libel statutes (although sometimes even print runs into content trouble, usually for obscenity)? Because there is a general recognition that radio and television, being far more pervasive and inescapable than print, must also be more accountable.
This pervasiveness is even more true of the Internet, not least because the Internet sells itself as, and in fact is, so very much more than a mere entertainment or news medium: it is also a research library, a marketplace, and a schoolroom. Given these additional roles, there is no reason to maintain that the Internet is entitled to the same First Amendment protection as print or even radio and television. The Internet should therefore cease to be governed by such undeniably loose rules and instead be overseen by an agency that would more closely resemble the FCC but have even broader power, specifically the power of prior restraint. Many would argue that this violates the First Amendment, but it is time for an open recognition that not only is the Internet an information delivery system of unprecedented power, it is also many other things — and serves many other purposes. It therefore requires unprecedented attempts to assure the veracity of the content it purveys and to protect those who use it. And if that means suspending full First Amendment protection from the Internet, so be it.
I do not see the government playing a moral role so much as a verification and attribution assurance role — regulating the Internet should be like regulating food and drugs as much as it should be like regulating books, TV and movies. A hundred years ago, people had no way of determining the quality of the food and drugs that went into their bodies. Now they cannot determine what sources or forces may be behind the information that goes into their minds. Regulation is desperately needed to prevent widespread, even general, mental and intellectual poisoning of the public.
People assume that what they read on the Net is true. There must be strenuous efforts first and foremost to guarantee that what is represented as fact is fact, and that what is not fact is clearly labeled as such. Right now I could put up a Web site concerning just about anything and say just about anything so long as I attached a disclaimer. But disclaimers can be so well hidden (sometimes even in the code of the site) that they amount to nothing. We should openly recognize that many Net users do not possess the technical skills to detect such deceptions; therefore they need help, just as they need help with radio and television. (A crude and obvious example is the “Blair Witch” site — not much effort was needed to discover that it was a promotional gimmick, but that was evidently more effort than most people were willing or had the ability to make.) We must have new verification and attribution statutes that are vigorously enforced by empowered agencies, with real punishments for violators.
I don’t think this is a particularly complex concept — but it will be difficult to implement. A federal agency is unquestionably required, although the further argument in my book is that one will never be established: The con job being pulled off by the information companies is too good. They paint themselves as the guardians of free speech, as do the politicians who support them, when really it’s about profits for the companies and about graft for the politicians.
OK, so how do you use all this regulation to turn information into knowledge? That is trickier, of course, and becomes a more individual-oriented philosophical issue. But again, I think the key is verification and attribution enforcement — laws that provide stiff penalties for those who misrepresent information as true when it is false or who do not label fictions as such upfront. If we have a government agency continuously making sure that supposed research presented on the Internet is factual; if site owners can be held legally responsible for disinformation they may unwittingly disseminate (much in the way that if you unknowingly buy stolen goods you can still become an accessory after the fact); and if all images, data and text have to be accompanied by authorship or provenance information, or simply can’t be posted; if these and many similar rules are laid down — I think you can begin to get a picture of the very different kind of Net that would ensue.
Does this mean that information will necessarily become knowledge? No, but it will be a big step. Still, a lot of that process is up to the individual. It’s just that the individual can’t even make the decision to strive for knowledge if he or she has no way to judge the quality of the information he or she is receiving.
Moving beyond the recognition that the Internet is a very different animal from print, I don’t really know if new kinds of corporate regulation would be required to achieve these goals. Certainly it is sinister that so few companies control so much information and entertainment, and there is no reason to think that they operate any differently than their blue-chip ancestors — observe how Napster has revealed its true colors as just another attempt to beguile members of the public with claims of being on “their side,” only to turn around and try to bilk them. But this is true of all industries. What’s troubling here is that you’re dealing with delivery systems that so directly affect what people learn and therefore what they choose to believe. So the need for having multiple hands at the helm is all the greater. Do we need new antitrust and anti-monopoly rules for the Internet, along with new quality-control regulations? Not if the existing rules are stringently enforced. But then, what are the odds of that? So maybe new codes will be necessary.
Am I ultimately saying that the government should shut down the Drudge Report because it’s irresponsible and specious? Absolutely not. I’m saying there should be an agency in place that would terrify Matt Drudge into vetting his reports and not publishing hearsay unless it is labeled as such. If such an agency existed, would Drudge, like the snake oil salesman he is, eventually be driven out of business by the reduced sensationalism of his product? Perhaps. And I can’t say I’d shed a tear. But he wouldn’t have been shut down by the government.
Government regulation of information technology is nowhere in sight, however. Politicians fall all over themselves to prove that they will regulate less than their opponents, and the public has somehow become convinced that this lack of regulation is in their, rather than business’s, best interests. All of which instilled in me a desire to issue a warning as to where I believe the path we are on must inevitably lead. And that is what I consider “Killing Time”: not a “dystopian” novel but a warning novel; not an ivory tower analysis but an engaged and passionate commentary.
Warnings are by nature concerned with consequences; therefore speculative, futuristic fiction, rather than another look backward, seemed the appropriate genre. But there was a further reason for framing this warning in futuristic terms: As I learned with both “The Alienist” and “The Angel of Darkness,” historical novels, even when they concern horrendous subjects, offer readers a certain zone of comfort, the comfort of knowing they are reading about times gone by, and the comfort of believing — even if it is a delusional belief — that things have changed. I could not allow such a comfort zone to be generated by “Killing Time,” not if I wanted the warning to hit home.
And so, readers, if you feel irritated, disturbed and upset by “Killing Time,” be secure in the knowledge that you are meant to; but I hope you will also feel entertained, amused and touched. Perhaps it’s perverse to try to spark such a contradictory array of reactions; perhaps, when all’s said and done, that’s my real problem.
Caleb Carr is the author of three novels, including “The Alienist” and, most recently, “Killing Time.”