It starts on a cold November night. You go to bed, comfortably warm, after listening to the late news: a nor’easter coming through, the worst storm in several years. You drift off quietly excited at the thought of a fairly certain snow day — build a snowman with the kids, maybe work through the email that has piled up, and do a little online shopping; after all, Christmas is coming.
That’s not the way it works out, though — about 3 a.m., you awaken, cold. The house is too cold. You get out of bed — the hardwood floor icy against your feet — and when you flip the hall light switch, nothing happens. Odd, the power is out. Automatically, you look out the window and realize the whole neighborhood is dark; in fact, there is no sky glow — usually, you can see the red shimmer of New York City on a cloudy night. It’s darker than you’ve ever seen it.
Sounds like a Tom Clancy novel, doesn’t it? It’s all too realistic, though. This is based on a scenario that was war-gamed by the “U.S. Professionals for Cyber Defense” in the months after 9/11. I talked it over with Dr. John McHugh, Canada Research Chair in Privacy and Security in the faculty of computer science at Dalhousie University in Halifax, Nova Scotia, who was one of the members of the group. They investigated whether there was a credible threat from a first-strike cyberattack. Their answer was frightening.
Railroads are largely controlled by computers; change a switch while a train is passing over it and you have an instant derail. Gas pipelines are also computer controlled; to my surprise, you can blow them up entirely by computer control — reverse the pumps on the ends, pressure builds up in the middle, and something, somewhere, will eventually give way.
Traffic flow, the electrical system, all much the same. To give the most effect, attack during a major storm — the nor’easter — and apply a few “kinetic” attacks (read “bombs”) at critical points. Dr. McHugh says they found the most credible attacks combined large-scale cyberattacks with a few small conventional acts of terrorism at vulnerable points, in order to surgically cause the most damage. The attacks were low effort, but high skill, and they could cripple the U.S. economy for years.
You have to fumble in the dark to find the phone; it’s dead. You try your cell phone; no service. And the house is getting colder.
You were better prepared than a lot of people: you have a portable radio and flashlight combination, and it’s even one of the ones that can be hand-cranked. It’s more work than you thought to crank it up, but now you’re getting nervous. You turn it on — and you need to search for a station. You finally find a distant station, CJCL in Toronto. They are reading news, in a hushed and controlled voice. Power out over large parts of the East Coast, in California, and across the Midwest. Explosions reported in Texas and Oklahoma, trains derailed all over the country, the tunnels into Manhattan closed. Telephone systems out over much of the country — and the president will be speaking soon. He’s been moved to a secret, secured location. Once again, like on September 11, 2001, the world wonders: is it war?
Is it war? If so, it’s a different war than any we’ve ever seen: there have been a few small, carefully calculated physical attacks, but almost all the damage has been done by people sitting at computers. And where are the computers? In China? Russia? Iran? Who knows?
This would be a whole new problem: Whom do you hit back? We’d have a good idea before too long, but were they really government actors? Or were the attackers terrorists? (This might be even harder to tell if they didn’t bother with the few physical attacks. They might do a little less damage, but they’d leave a lot fewer traces.) Maybe they were just freelance hackers, like the ones some people suspect were behind the attacks on Estonia and Georgia. It could put the president in a very difficult position: Can he take any real steps to respond? Is a cyberattack even an act of war at all?
Time passes and the United States slowly begins to recover. The National Guard had to be called in to use their radios and vehicles to move food, but there were only a few thousand deaths all told. When the inevitable investigations begin, the big questions in everyone’s mind are: How did it happen? What should we have done to prevent it?
Obviously, we’d rather not have anything like this happen, but how can we prevent it? The Center for Strategic and International Studies looked at the question and presented a new report, “Securing Cyberspace for the 44th Presidency.”
They make some useful suggestions: establish a policy to define to the rest of the world what we would consider an “act of war,” develop a national response strategy, and establish new offices within the Department of Homeland Security — and, we presume, inside the Department of Defense — to coordinate the process.
Most of the suggestions in the CSIS report — and the USPCD suggestions from five years ago — are good ones, if not very surprising: more regulation, using the government’s purchasing policies to drive the market, more research, and the development of active government defenses and responses under a “cybersecurity czar.”
I don’t think the bureaucratic answer is the most effective one, though. The truth is we’re in this position because for the last 30 years no one has really cared about security. (Well, almost no one: my colleagues in the computer security world do, of course. But we’ve had to struggle for funding and attention.) Knowing that no one really cared, the software vendors have written “end user license agreements” that basically promise that when you open the shrink wrap, you will find a box inside.
So I have a little proposal. The best solution, and the most honest one, is to make the system reward good behavior and punish bad, and there’s no better way to do that than making failure expensive.
Instead of — or along with — this CSIS proposal, let’s make a change in the product liability law. After some date certain, say in five years, make suppliers of critical software and systems liable for consequential damages.
A law that made the vendor liable for damages when a failure in the software let an attacker do damage would give companies (yes, I’m talking to you, Microsoft) a much greater impetus to make their systems secure. It would also give the companies whose systems already are secure (when did you last hear of a damaging Macintosh or UNIX virus?) a greater advantage in the marketplace. That would be the surest way of making the software companies pay attention to security, and that’s the key to preventing a damaging cyberwar attack.