The myth of interference

Internet architect David Reed explains how bad science created the broadcast industry.

There’s a reason our television sets so outgun us, spraying us with trillions of bits while we respond only with the laughable trickles from our remotes. To enable signals to get through intact, the government has to divide the spectrum of frequencies into bands, which it then licenses to particular broadcasters. NBC has a license and you don’t.

Thus, NBC gets to bathe you in “Friends,” followed by a very special “Scrubs,” and you get to sit passively on your couch. It’s an asymmetric bargain that dominates our cultural, economic and political lives — only the rich and famous can deliver their messages — and it’s all based on the fact that radio waves in their untamed habitat interfere with one another.

Except they don’t.

“Interference is a metaphor that paints an old limitation of technology as a fact of nature.” So says David P. Reed, electrical engineer, computer scientist, and one of the architects of the Internet. If he’s right, then spectrum isn’t a resource to be divvied up like gold or parceled out like land. It’s not even a set of pipes with their capacity limited by how wide they are or an aerial highway with white lines to maintain order.

Spectrum is more like the colors of the rainbow, including the ones our eyes can’t discern. Says Reed: “There’s no scarcity of spectrum any more than there’s a scarcity of the color green. We could instantly hook up to the Internet everyone who can pick up a radio signal, and they could pump through as many bits as they could ever want. We’d go from an economy of digital scarcity to an economy of digital abundance.”

So throw out the rulebook on what should be regulated and what shouldn’t. Rethink completely the role of the Federal Communications Commission in deciding who gets allocated what. If Reed is right, nearly a century of government policy on how best to administer the airwaves needs to be rebuilt from the bottom up.

* * *

Spectrum as color seems like an ungainly metaphor on which to hang a sweeping policy change with such important social and economic implications. But Reed will tell you it’s not a metaphor at all. Spectrum is color. It’s the literal, honest-to-Feynman truth.



David Reed is many things, but crackpot is not one of them. He was a professor of computer science at MIT, then chief scientist at Software Arts during its VisiCalc days, and then chief scientist at Lotus during its 1-2-3 days. But he is probably best known as a coauthor of the paper that got the Internet’s architecture right: “End-to-End Arguments in System Design.”

Or you may recognize him as the author of what’s come to be known as Reed’s Law — which says the true value of a network isn’t determined by the number of individual nodes it connects (Metcalfe’s Law) but by the far higher number of groups it enables. But I have to confess that I’m biased when it comes to David Reed. I first encountered him in person three years ago at a tiny conference when he deftly pulled me out of a hole I was digging for myself in front of an audience of my betters. Since then, I’ve watched him be bottomlessly knowledgeable on a technical mailing list and patiently helpful as a source for various articles I’ve worked on.
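The difference between the two laws is easy to see with back-of-the-envelope arithmetic. This is an illustrative sketch, not anything from Reed’s own papers: Metcalfe’s Law counts the pairwise connections a network of n nodes allows, while Reed’s Law counts the possible groups, which grows exponentially.

```python
# Toy comparison of the two network-value scaling laws.
# Metcalfe's Law: value tracks the number of possible pairs, ~ n^2.
# Reed's Law: value tracks the number of possible groups, ~ 2^n.

def metcalfe(n: int) -> int:
    """Number of distinct pairs of nodes: n choose 2."""
    return n * (n - 1) // 2

def reed(n: int) -> int:
    """Number of nontrivial groups: all subsets minus singletons
    and the empty set, i.e. 2^n - n - 1."""
    return 2 ** n - n - 1

for n in (10, 20, 30):
    print(n, metcalfe(n), reed(n))
```

Even at 30 nodes the group count dwarfs the pair count, which is the point: group-forming networks gain value far faster than point-to-point ones as they grow.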

It doesn’t take much to get Reed to hold forth on his strong, well-articulated political and social beliefs. But when it comes to spectrum, he speaks most passionately as a scientist. “Photons, whether they are light photons, radio photons, or gamma-ray photons, simply do not interfere with one another,” he explains. “They pass through one another.”

Reed uses the example of a pinhole camera, or camera obscura: If a room is sealed against light except for one pinhole, an image of the outside will be projected against the opposite wall. “If photons interfered with one another as they squeezed through that tiny hole, we wouldn’t get a clear image on that back wall,” Reed says.

If you whine that it’s completely counterintuitive that a wave could squeeze through a pinhole and “reorganize” itself on the other side, Reed nods happily and then piles on: “If photons can pass through one another, then they aren’t actually occupying space at all, since the definition of ‘occupying’ is ‘displacing.’ So, yes, it’s counterintuitive. It’s quantum mechanics.”

Surprisingly, the spectrum-as-color metaphor turns out to be not nearly as confounding to what’s left of common sense. “Radio and light are the same thing and follow the same laws,” Reed says. “They’re distinguished by what we call frequency.” Frequency, he explains, is really just the energy level of the photons. The human eye detects different frequencies as different colors. So, in licensing frequencies to broadcasters, we are literally regulating colors. Crayola may own the names of the colors it’s invented, and Pantone may own the standard numbers by which digital designers refer to colors, but only the FCC can give you an exclusive license to a color itself.
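Reed’s “same thing, same laws” claim can be put in numbers. The photon energy he mentions is just Planck’s constant times frequency, E = h × f; the figures below are a rough illustration, not part of Reed’s argument:

```python
# "Radio and light are the same thing": a photon's energy is
# E = h * f, Planck's constant times frequency. An AM carrier at
# 770 kHz and green light sit on the same single scale.

H = 6.626e-34  # Planck's constant, in joule-seconds

def photon_energy(freq_hz: float) -> float:
    """Energy of one photon at the given frequency, in joules."""
    return H * freq_hz

print(photon_energy(770e3))   # 770 kHz AM carrier: ~5.1e-28 J
print(photon_energy(5.6e14))  # green light (~560 THz): ~3.7e-19 J
```

The only difference between the broadcast band and the rainbow is roughly nine orders of magnitude on that one dial.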

Reed prefers to talk about “RF [radio frequency] color,” because the usual alternative is to think of spectrum as some large swatch of property. If it’s property, it is easily imagined as finite and something that can be owned. If spectrum is color, it’s a lot harder to think of in that way. Reed would recast the statement “WABC-AM has an exclusive license to broadcast at 770 kHz in NYC” to “The government has granted WABC-AM an exclusive license to the color Forest Green in NYC.” Only then, according to Reed, does the current licensing policy sound as absurd as it is.

But if photons don’t interfere, why do our radios and cellphones go all crackly? Why do we sometimes pick up two stations at once and not hear either well enough?

The problem isn’t with the radio waves. It’s with the receivers: “Interference cannot be defined as a meaningful concept until a receiver tries to separate the signal. It’s the processing that gets confused, and the confusion is highly specific to the particular detector,” Reed says. Interference isn’t a fact of nature. It’s an artifact of particular technologies. This should be obvious to anyone who has upgraded a radio receiver and discovered that the interference has gone away: The signal hasn’t changed, so it has to be the processing of the signal that’s improved. The interference was in the eye of the beholder all along. Or, as Reed says, “Interference is what we call the information that a particular receiver is unable to separate.”

But, Reed says, “I can’t sign on to ‘It’s the receiver, stupid.’” We have stupid radios not because we haven’t figured out how to make them smart but because there’s been little reason to make them smart. They’re designed to expect signal to be whatever comes in on a particular frequency, and noise to be everything on other frequencies. “The problem is more complex than just making smart radios, because some of the techniques for un-confusing the receiver are best implemented at the transmitter, or in a network of cooperating transmitters and receivers. It’s not simply the radios. It’s the systems architecture, stupid!”

One of the simplest examples of an architecture that works was invented during World War II. We were worried that the Germans might jam the signals our submarines used to control their radio-controlled torpedoes. This inspired the first “frequency-hopping” technology: The transmitter and receiver were made to switch, in sync, very rapidly through a pseudorandom schedule of frequencies. Even if some of those frequencies were in use by other radios or jammers, error detection and retransmission would ensure a complete, correct message. The U.S. Navy has used a version of frequency-hopping as the basis of its communications since 1958. So we know that systems that enable transmitters and receivers to negotiate do work — and work very well.
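The core trick is small enough to sketch. In this illustrative toy (the channel numbers and seed are made up), transmitter and receiver share a secret seed, so both derive the identical pseudorandom hop schedule, while a jammer without the seed can block only a fraction of the hops:

```python
import random

# Toy frequency-hopping schedule: a shared seed lets transmitter and
# receiver derive the same pseudorandom sequence of channels, so they
# stay in sync without ever coordinating over the air.

CHANNELS = list(range(100, 200))  # arbitrary channel labels, illustrative

def hop_schedule(seed: int, n_hops: int) -> list:
    """Return the first n_hops channels for a given shared seed."""
    rng = random.Random(seed)
    return [rng.choice(CHANNELS) for _ in range(n_hops)]

tx = hop_schedule(seed=42, n_hops=8)
rx = hop_schedule(seed=42, n_hops=8)
assert tx == rx  # same seed, same schedule: the link holds
print(tx)
```

With error detection and retransmission layered on top, hops that happen to collide with another user or a jammer are simply resent on a later hop, which is exactly the property the article describes.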

So what architecture would Reed implement if he were king of the world or, even less likely, chairman of the FCC?

Here Reed is dogmatically undogmatic: “Attempting to decide what is the best architecture before using it always fails. Always.” This is in fact a one-line recapitulation of the end-to-end argument he and his coauthors put forward in 1981. If you want to maximize the utility of a network, their paper maintained, you should move as many services as feasible out of the network itself. While that may not be as counterintuitive as the notion of photons not occupying space, it is at least non-obvious, for our usual temptation is to improve a network by adding services to it.

That’s what the telephone companies do: They add Caller I.D., and now their network is more valuable. We know it’s more valuable because they charge us more for it. But the end-to-end argument says that adding services decreases the value of a communications network, for it makes decisions ahead of time about what people might want to do with the network. Instead, Reed and his colleagues argued, keep the network unoptimized for specific services so that it’s optimized for enabling innovation by the network’s users (the “ends”).

That deep architectural principle is at the core of the Internet’s value: Anyone with a good idea can implement a service and offer it over the network instead of having to propose it to the “owners” of the network and wait for them to implement it. If the phone network were like the Internet, we wouldn’t have had to wait 10 years to get caller I.D.; it would have been put together in one morning, implemented in the afternoon, and braced for competitive offerings by dinnertime.

For Reed the question is, What is the minimum agreement required to enable wireless communications to be sorted out? The less the system builds into itself, the more innovation — in ideas, services and business models — will arise on the edges.

There is active controversy, however, over exactly how much “hand shaking” protocol must be built in by the manufacturer and required by law. Reed believes that as more and more of radio’s basic signal-processing functions are defined in software, rather than etched into hardware, radios will be able to adapt as conditions change, even after they are in use. Reed sees a world of “polite” radios that will negotiate new conversational protocols and ask for assistance from their radio peers.

Even with the FCC removed from the center of the system so that the “ends” can dynamically negotiate the most efficient connections, Reed sees a continuing role for government involvement: “The FCC should have a role in specifying the relevant science and technology research, through the NSF [National Science Foundation]. There may even be a role for centralized regulation, but it’s got to focus on actual problems as they arise, not on theoretical fantasies based on projections from current technology limits.”

It’s clear in speaking with Reed that he’s frustrated. He sees an economy that’s ready to charge forward being held back by policies based on the state of the art when the Titanic sank. (That’s literally the case: The government gave itself the right to license the airwaves in 1912 in response to the Titanic’s inability to get a clear distress signal out.) Key to the new generation, according to Reed, are software-defined radios. An SDR is smart precisely where current receivers are dumb. No matter how sophisticated and expensive the receiver in your living room is, once it locks on to a signal it knows how to do only one thing with the information it’s receiving: treat it as data about how to create subtle variations in air pressure. An SDR, on the other hand, makes no such assumption. It is a computer and can thus treat incoming data any way it’s programmed to. That includes simultaneously receiving two signals on separate frequencies from the same source, as demonstrated by Eric Blossom, an engineer on the GNU Radio project.
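The “it’s a computer” point can be made with audio-rate arithmetic. In this illustrative sketch (the frequencies and sample rate are invented for the example, and real SDRs work on radio samples, not these toy tones), one stream of samples carries two signals at once, and software pulls each out by correlating against a reference tone:

```python
import math

# One sample stream, two signals: software, not hardware, decides how
# to interpret the data. Here we mix two tones into a single stream,
# then measure the energy at chosen frequencies with a one-bin DFT.

RATE = 8000  # samples per second, illustrative

def tone(freq: float, n: int) -> list:
    """n samples of a unit sine wave at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def energy_at(samples: list, freq: float) -> float:
    """Correlate the samples against a reference tone at freq."""
    re = sum(s * math.cos(2 * math.pi * freq * i / RATE)
             for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / RATE)
             for i, s in enumerate(samples))
    return (re * re + im * im) / len(samples)

n = RATE  # one second of samples
mixed = [a + b for a, b in zip(tone(440, n), tone(1000, n))]

# Large energy at 440 and 1000 Hz, essentially none at 700 Hz:
print(energy_at(mixed, 440), energy_at(mixed, 1000), energy_at(mixed, 700))
```

Both tones ride in the same stream and both come out cleanly; nothing but the program decides which frequencies count as “signal” and which as “noise,” which is the receiver-side point Reed is making.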

Of course, an SDR doesn’t have to treat information as encoded sounds at all. For example, says Reed, “when a new Super-Frabjoulous Ultra-Definition TV network broadcasts its first signal, the first bits it will send would be a URL for a Web site that contains the software to receive and decode the signals on each kind of TV in the market.”

But SDR addresses only one component. Reed sees innovation all across the spectrum, so to speak. He and his fellow technologist, Dewayne Hendricks, have been arguing for what they call “very wide band,” a name designed to refer to a range of techniques of which “ultra-wide band” (UWB) is the most familiar. Ultra-wide band packs an enormous amount of information into very short bursts and transmits them across a wide range of frequencies: lots of colors, lots of information. Reed says: “The UWB currently proposed is a simple first step. UWB transceivers are simple and could be quite low-cost. And UWB can transmit an enormous amount of information in a very short burst — for example, a whole DVD could be sent to your car from a drive-through, fast movie-takeout stand.” Other very-wide-band techniques, not yet as well developed as UWB, spread energy more smoothly in time and, Reed believes, are more likely to be the basis of highly scalable networks.

Given Reed’s End-to-End commitment, it should be clear that he’s not interested in legislating against older technologies but in helping the market of users sort out the technology they want. “Our goal should be to enable a process that encourages the obsolescence of all current systems as quickly as economically practicable. That means that as fast as newer, better technology can be deployed to implement legacy functions, those legacy functions should go away due to competition.” In other words, you’ll be able to pick up NBC’s “West Wing” signal on your current TV until so many people have switched to the new technology that broadcasters decide to abandon the current broadcast techniques. “People didn’t have to be legislated into moving from the Apple II. They did it voluntarily because better technology emerged,” Reed says.

But ultimately Reed isn’t in this because he wants us to have better TVs or networked digital cameras. “Bad science is being used to make the oligarchic concentration of communications seem like a fact of the landscape.” Opening the spectrum to all citizens would, according to him, be an epochal step in replacing the “not” with an “and” in Richard Stallman’s famous phrase: “Free as in ‘free speech,’ not free as in ‘free beer.’” Says Reed: “We’ve gotten used to parceling out bits and talking about ‘bandwidth.’ Opening the spectrum would change all that.”

But surely there must be some limit. “Actually, there isn’t. Information isn’t like a physical thing that has to have an outer limit even if we don’t yet know what that limit is. Besides advances in compression, there’s some astounding research that suggests that the informational capacity of systems can actually increase with the number of users.” Reed is referring to work by researchers in the radio networking field, such as Tim Shepard and Greg Wornell of MIT, David Tse of UC-Berkeley, Jerry Foschini of Bell Labs, and many others, as well as work being carried out at MIT’s Media Lab. If this research fulfills its promise, it’s just one more way in which the metaphor of spectrum-as-resource fails and misdirects policy.

“The best science is often counterintuitive,” says Reed. “And bad science always leads to bad policy.”

David Weinberger is the coauthor of “The Cluetrain Manifesto” and the author of “Small Pieces Loosely Joined.”
