Ten years ago our critical infrastructure was run by a series of specialized systems, both computerized and manual, on dedicated networks. Today, many of these computers have been replaced with standard mass-market computers connected via the Internet. This shift brings with it all sorts of cost savings, but it also brings additional risks. The same worms and viruses, the same vulnerabilities, the same Trojans and hacking tools that have so successfully ravaged the Internet can now affect our critical infrastructure.
For example, in late January 2003, the Slammer worm knocked out 911 emergency telephone service in Bellevue, Wash. The 911 data-entry terminals weren't directly connected to the Internet, but they shared servers with the rest of the city, and when those servers started to fail under Slammer's traffic, the failure took the 911 terminals down with them.
What's interesting about this story is that it was unpredicted. Slammer attacked systems essentially at random and happened to knock out 911 service. This isn't an attack that could have been planned in advance. It was an accidental failure, and one that happened to cascade into a major failure for the citizens of Bellevue.
I have read article after article about the risks of cyberterrorism. They're all hype; there's no real risk of cyberterrorism. Worms and viruses have caused all sorts of network disruptions, but it's all been by accident. In January 2003, the SQL Slammer worm disrupted 13,000 ATMs on Bank of America's network. But before it happened, you couldn't have found a security expert who would have predicted that those ATMs were vulnerable to that worm. We simply don't understand the interactions well enough to predict which kinds of attacks could cause catastrophic results.
More recently, in August 2003, the Nachi worm disabled Diebold ATMs at two financial institutions (Diebold declined to name which ones). These machines were running the Windows operating system and were connected to the Internet. ATMs that weren't running Windows were unaffected.
As mass-market computers and networks permeate more and more of our critical infrastructure, that infrastructure becomes vulnerable not only to attacks but also to sloppy software and sloppy operations. And these vulnerabilities are not necessarily the obvious ones. The computers that directly control the power grid (for example) are well protected. It's the peripheral systems that are less protected and more likely to be vulnerable. And a direct attack is unlikely to cause our infrastructure to fail, because the connections are too complex and too obscure. It's only by accident -- a worm affecting systems at just the wrong time, allowing a minor failure to become a major one -- that these massive failures occur.
Might this be what happened during the great blackout of this past summer?
The "Interim Report: Causes of the August 14th Blackout in the United States and Canada," published in November and based on detailed research by a panel of government and industry officials, blames the blackout on an unlucky series of failures that allowed a small problem to cascade into an enormous failure.
The Blaster worm affected more than a million computers running Windows during the days after Aug. 11. The computers controlling power generation and delivery were insulated from the Internet, and they were unaffected by Blaster. But critical to the blackout were a series of alarm failures at FirstEnergy, a power company in Ohio. The report explains that the computer hosting the control room's "alarm and logging software" failed, along with the backup computer and several remote-control consoles. Because of these failures, FirstEnergy operators did not realize what was happening and were unable to contain the problem in time.
Simultaneously, another status computer, this one at the Midwest Independent Transmission System Operator, a regional agency that oversees power distribution, failed. According to the report, a technician tried to repair it and forgot to turn it back on when he went to lunch.
To be fair, the report does not blame Blaster for the blackout. I'm less convinced. The failure of computer after computer within the FirstEnergy network certainly could be a coincidence, but it looks to me like the work of a malicious worm.
No matter what caused the computer failures, the story is illustrative of what is to come. The computer systems we use on our desktops are not reliable enough for critical applications. Neither is the Internet. The more we rely on them in our critical infrastructure, the more vulnerable we become. The more our systems become interconnected, the more vulnerable we become.
It's not the power generation computers, it's the alarm computers. It's not the police and medical systems, it's the 911 computers that dispatch them. It's the computer you never thought about, that -- surprise -- is critical and critically vulnerable.