Winter 2005 vol 4.1
An Interview with Bruce Schneier
BRUCE SCHNEIER is an internationally renowned security technologist and author. Described by The Economist as a "security guru," Schneier is best known as a candid and lucid security critic and commentator. He has written articles for, among other publications, the Boston Globe, the San Francisco Chronicle, the Sydney Morning Herald, the International Herald Tribune, The Baltimore Sun, Newsday, Salon.com, Wired Magazine, and the San Jose Mercury News. He is also the founder and CTO of Counterpane Internet Security, Inc., the world's leading protector of networked information—the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats.

Schneier's book publications include Beyond Fear: Thinking Sensibly About Security in an Uncertain World; Secrets & Lies: Digital Security in a Networked World; Applied Cryptography; Protect Your Macintosh; E-Mail Security; Practical Cryptography (with co-author Niels Ferguson); and The Electronic Privacy Papers: Documents on the Battle for Privacy in the Age of Surveillance (with co-author David Banisar).

Schneier also publishes a free monthly newsletter, Crypto-Gram (http://www.schneier.com/crypto-gram.html), which has over 100,000 readers. Additionally, Schneier maintains a weblog covering security and security technology issues. You can read all of his writings on security at http://www.schneier.com/.

turnrow's Assistant Editor Claudia Grinnell interviewed Bruce Schneier in December of 2004.

Fear's a funny thing. In the comedy Defending Your Life, Albert Brooks defends the actions of his life in the court of a superior life form, one that calls us humans "little brains" because we use only a tiny percentage of our brains while they boast brain usage in the 50 and 60 percent range. Brooks is called to defend nine days of his life, days in which his actions were judged fear-based by his prosecutor. Brooks' actions on those days reveal a man frozen into indecision or bad decision by fear, even though his defense attorney tries to spin the actions positively. It becomes obvious that Brooks' character is somehow deficient and that he has to be recycled into humanity for another round, another chance to live fearlessly, until he is ready to move to a "higher" level. The lesson of the film is clear: fear is bad, lack of fear is good because it is unselfish, not driven by individual, egoic concerns. Acting egoistically, we are unable to act freely and lovingly, and instead act small and defensively. Backed into the defensive, fear-based corner, we make the wrong choices. We want to play it safe, and in doing so, we play it wrong.

I seriously considered buying a gun a few months ago. And possibly getting a dog, too. A loud, big, muscular dog. And most certainly a much bigger car than my little underpowered matchbox on wheels. Parked between a Hummer and a Land Cruiser one day, I felt crushed.

The gun idea stuck around the longest. I rationalized it like this: I'd feel safer with it in the house because if I ever needed to defend myself and mine, I shouldn't have to wait for the police to arrive. I really had no clear idea against whom I had to defend myself. After all, I have no personal enemies that I know of. But there was, in the back of my mind, a shadowy, dangerous figure that wanted to do me in, get my stuff.

I really can't say exactly why I haven't bought a gun yet. On some level, it still makes sense to me to own one; part of me really believes that. And that part tries to overpower the part that says, hey, wait a minute—what exactly do you think the chances are you will get attacked, or that you wouldn't shoot yourself accidentally? People who are afraid make bad choices. In a moment of wild panic I might shoot what looks like an intruder but is merely my husband, getting a late snack in the kitchen. On the other hand, it might be an intruder. I'd only know for certain after the fact. And the fear of making a wrong choice would probably add to my sense of panic.

Everybody likes to think well of himself. We like to think of ourselves as basically decent people, with human flaws, naturally, but nothing totally aberrant; nothing, for example, in the same league as people who fly airplanes into buildings or who gun down children in schools. We like to think of ourselves as rational, forward-looking, generous folks. It's "the others" who are bad, evil. We want to protect ourselves against "them," because we are, let's face it, afraid—afraid that something might happen. Nobody is exactly sure what this something is: shoe bombers, anthrax, outsourcing, downsizing, snipers, tainted fruit, cancer-causing agents, earthquakes, al Qaeda, mass graves, no healthcare, stocks falling 30 percent, hurricanes, jobless recovery, yellowcake, condition orange, yellow, red, and, of course, sharks.

There sure is a lot of talk. When we talk these days about security, it seems that we focus on national security. But if there is a sense that our nation feels unsafe or vulnerable, it is because our personal aggregate fears, insecurities, and worries saturate and perfume our being. It is in this context of individual and communal fears that I spoke via e-mail with Bruce Schneier.

Early in Beyond Fear, Schneier writes, "We need to move beyond fear and start making sensible security trade-offs," so, naturally, one of my first questions was how to convince people to think of security in terms of trade-offs. His response, that "people already think about security in terms of trade-offs," was illustrated by the idea of wearing a bulletproof vest. Arguing that vests work, but that we don't wear them because we don't think "the added security is worth the trade-offs: the cost, the inconvenience, the fashion faux pas," he went on to say that people knew that "permanently grounding commercial aircraft would make us safer, but they also know that it's too extreme a trade-off. They also realized that, in the days following the 9/11 tragedy, it was a perfectly reasonable temporary security trade-off."

In Beyond Fear, Schneier presents a set of questions to rationally assess the security process, in other words, how to get beyond fear:

Step 1: What assets are you trying to protect?
Step 2: What are the risks to those assets?
Step 3: How well does the security solution mitigate those risks?
Step 4: What other risks does the security solution cause?
Step 5: What costs and trade-offs does the security solution impose?
And finally: Is the countermeasure worth it?

Claudia Grinnell: Explain to me how people work with this set of questions.

Bruce Schneier: People certainly don't go through this thought process explicitly in their head, but people certainly have a natural intuition regarding security trade-offs. All living creatures do. Imagine a rabbit is in a field eating grass and he sees a fox. "What assets is he trying to protect?" His butt. "What are the risks?" He'll get eaten. "How well does his security solution mitigate those risks?" Well, his security solution is running away, so that depends on how far he is from his hole, how fast he is, and how close the fox is. "What other risks does running cause?" As prey, he knows that as soon as he moves, he's more likely to be noticed. "What costs and trade-offs does the solution impose?" Mostly it's the opportunity costs. He's eating grass because he's hungry, and if he runs and hides he won't get to eat. Of course, the rabbit isn't going to go through those five steps in his head, but he will make a stay-or-flee decision. And if you think about it, the rabbits that make this decision well are more likely to reproduce, and the rabbits that don't are more likely to get eaten or starve.

The point of my five steps is to slow the thought process down. By making an intuitive process analytical, we can explicitly examine what we're thinking and why. That way it can be better—and more rationally—discussed and debated.
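Schneier's framework can in fact be written down explicitly. What follows is a minimal sketch of my own, not anything from Beyond Fear: it treats the five steps as inputs to a single expected-value comparison, with every name and number invented for illustration.

```python
# A hypothetical rendering of the five-step analysis as code.
# All names and numbers are illustrative assumptions, not
# anything prescribed by Schneier.

from dataclasses import dataclass

@dataclass
class Countermeasure:
    name: str
    risk_reduction: float  # Step 3: fraction of the risk mitigated (0 to 1)
    new_risks: float       # Step 4: expected cost of problems it introduces
    cost: float            # Step 5: money, time, convenience, liberties, etc.

def worth_it(asset_value: float, risk_probability: float,
             cm: Countermeasure) -> bool:
    """Steps 1 and 2 are the inputs: the asset being protected and the
    chance of losing it. The final question -- is the countermeasure
    worth it? -- becomes a comparison of the expected loss avoided
    against everything the countermeasure costs."""
    expected_loss = asset_value * risk_probability
    loss_avoided = expected_loss * cm.risk_reduction
    return loss_avoided > cm.cost + cm.new_risks

# The rabbit's stay-or-flee decision, with made-up numbers: fleeing
# almost always saves his life, but costs him a meal and the risk of
# being noticed as soon as he moves.
flee = Countermeasure("run to the hole", risk_reduction=0.9,
                      new_risks=0.5, cost=1.0)
print(worth_it(asset_value=100.0, risk_probability=0.2, cm=flee))  # True
```

Slowing the intuition down to this form makes each disputed quantity visible: two people who disagree about a security measure are usually disagreeing about one of these inputs.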

CKG: Well, the rabbit sometimes does get eaten, no? If it's an intuitive process at work, why do people seem to get it so wrong sometimes?

BS: There are two answers. The first is that no security countermeasure is ever perfect. Life itself is risk, and all the rabbit can do is reduce the probability that the fox will eat him. That's true for us, too. We can never be absolutely secure. We can't completely eliminate the risk of crime or terrorism. All we can do is tip the odds in our favor.

But there's another, more interesting, answer. Central to the trade-off decision is a concept of risk, and people's perceptions of risk rarely match the reality of risk. As in the DC sniper example, people simply don't understand the true extent of a risk, and thus trade off either too much or too little.
There are lots of psychological studies that shed light on this phenomenon. In Beyond Fear, I talk about five common fallacies:

1. People exaggerate spectacular but rare risks, and downplay common risks.
2. The unknown is perceived to be riskier than the familiar.
3. Personified risks are perceived to be greater than anonymous risks.
4. People overestimate involuntary risks: risks in situations they can't control.
5. People overestimate risks that they can't control but think they should.

Look at these fallacies, and you can imagine how a species could evolve these sorts of biases. Species have two basic means of security when it comes to reproduction. They either have millions of offspring and hope a few of them survive, like lobsters, or they have very few offspring and devote a lot of energy to protecting and rearing them. Human beings use the second strategy, and a general risk aversion makes sense. We're safer if we are overly wary of the unknown, or of some spectacular disaster we just witnessed. As animals, we make security trade-offs based on our immediate environment. We do it either through instinct or through intelligence, and the bias in either case is toward survival.

Unfortunately, there are two aspects of modern society that throw this all out of whack. The first is technology. Our security intuition evolved in a world where nothing ever changed. Fear of the new made a lot of sense in that kind of world. But the pace of today's technology means that things change all the time. Look at the Internet: every week there's a new attack tool, a new vulnerability, a new danger. Every year your computer and networks change in some radical way. Technology means that your car, your house, and your bank accounts all have features that they didn't have ten years ago, and people simply don't have the detailed expertise to make sensible security trade-offs about them.

The second problem is the media. Modern mass media has degraded our sense of natural risk, by magnifying the rare and spectacular and downplaying the common and ordinary. If we're wired to make our security trade-offs based on our sensory inputs, media gives us a wildly skewed view of the world. It's why people fear airplane crashes and not car crashes, even though the risk from the latter is considerably higher.

CKG: About this explicit examination of what we're thinking and why. . . . If we are making our thinking visible, transparent, we are likely also making it visible to those against whom we are trying to protect ourselves. What's the cost associated with that?

BS: Less than you think. The bad guys are going to optimize their attack strategy as best they can. As defenders, we can do a better or worse job securing ourselves. Doing a better job publicly is more secure than doing a worse job in secret. Think about it. Would you rather have door locks that a burglar can break with ease, or strong door locks that the burglar knows are harder to break? As prey, do you care if the fox knows how you decide to stay or flee? Not really. What you want is to make the best possible security trade-offs.

CKG: Are security trade-offs necessarily viewed in terms of cost? Isn't it wrong to look at these trade-offs only in terms of money?

BS: Money is certainly a large part of it, and we make security trade-offs in terms of money all the time. Burglar alarms became more popular when they became wireless, and cheaper. Companies decide on various building security programs at least partly based on price. And there are lots of counterterrorist security measures we're not going to do as a nation simply because they're too expensive.

But I mean "cost" very generally here. Security countermeasures cost money, but they also cost time, convenience, accessibility, social status, privacy, civil liberties, freedoms. We might decide not to put our jewelry in a bank safe-deposit box because the "cost" in convenience is too great. We might decide not to allow the government to eavesdrop on all telephone calls because the "cost" in privacy is too great. In department stores, most shoplifting occurs in dressing rooms. A store could increase its security by putting cameras in all dressing rooms, but the "cost" in customer outrage would be too great.

CKG: Does that mean there's just one right answer? One size does not fit all, does it?

BS: Oh no. There are legitimate differences that people can have in trade-off analysis. There are both legitimate differences in risk analysis—aside from the irrational biases above—and legitimate differences in costs. You might decide that the inconvenience cost of having a home burglar alarm is worth it, while I might not. You might not care if airlines search your baggage, while I might not want to have to get to the airport an hour earlier than before, or have strangers rifling through my personal belongings.

There are a lot of real debates that we should be having on the national scale about securing our society against terrorism. And if we started framing national security issues in terms of these five steps, we could have them.

For example, reinforcing airplane cockpit doors is an excellent example of a countermeasure that's worth it. The asset being protected is well-understood, and there's a real risk. The countermeasure effectively reduces the risk, doesn't add any new problems, and requires minimal trade-offs: it's cheap, it doesn't affect the airline business in any way, and there are no civil liberties issues.

CKG: You don't see people engaging in a real debate on the national level?

BS: Not really. There's a lot of irrationality about security these days, especially about national security. Many of the security measures we're seeing post-9/11 simply don't make sense. There are people who blindly believe that they're all vital to our nation's security, and there are others that are just as convinced they're all ridiculous. And the two sides can't seem to talk to each other.

Some of this comes from legitimate differences in some of those five steps: differing beliefs about how serious the threat is, differing beliefs about how onerous the trade-offs are, and so on. But most of it is for another reason: security decisions are most often made for non-security reasons.

This concept is vital to understanding security decisions and how they're made. If you see a security decision you believe to be irrational, it's because you don't understand the context of the decision. Security is often a small part of a larger decision, and the non-security trade-offs often trump the actual security trade-offs. So while I see a lot of irrational national security decisions made by government, I can understand these decisions by figuring out the real motivations behind them.

CKG: I want to come back to that last idea: security decisions being made for non-security reasons. But let's, for a moment, talk about what seems to make much sense to many people: racial profiling. At least on the surface, the idea of giving extra attention to a certain demographic one has identified as "the enemy" makes sense. In the war on terror, we seem to have identified the enemy fairly narrowly: male, young, Middle Eastern, and Moslem. Rudy Maxa, the travel expert in residence on the public radio program Marketplace, makes a pretty good case, no?

"... captured al Qaeda documents show that Arab men are probing for weaknesses in U.S. security. So, is secondary profiling at airports a civil rights violation? I say no. Not if done efficiently and with respect and courtesy. Political correctness mustn't get in the way of security."

Does it make good sense in terms of security?

BS: This is a very subtle and sensitive issue, and I spend a lot of time on it in Beyond Fear. You have to separate out the security and non-security issues. Profiling only works as a security measure if you get the profile right, and it makes things worse if you don't. Think again about the rabbit in the field. When he sees a fox, he's profiling. He doesn't know that the fox wants to eat him. He doesn't know that it isn't a good fox, a kind fox, a friendly fox. Of course this is silly, because all foxes are the same and the rabbit is smart to profile.

But as soon as you profile, you create an avenue for the attacker to gain an advantage. Any attacker that doesn't meet the profile is going to have an easier time getting through whatever security system you have in place. Think of a wolf in sheep's clothing.

It's no different with people. If we knew that Arab men, and only Arab men, were terrorists, then profiling would make security sense. But we don't. Timothy McVeigh was an American. So was William Krar, arrested in Texas last April with automatic weapons, pipe bombs, and at least one cyanide bomb: an actual chemical weapon. Jose Padilla, the alleged dirty bomber, was Hispanic. Shoe bomber Richard Reid was born British, and is half Jamaican. Terrorists are European, Asian, and African. They're male and female, young and old. If we profile based on ethnicity, the terrorists are just going to pick operatives who don't fit the profile.

Remember the two Russian planes that were blown up in August 2004? The suicide bombers were women, presumably because Russian airport security does not search women.

Racial profiling has other non-security problems. There's an enormous social cost to it, because it stigmatizes a particular group of people. And since these are the very people we want on our side, helping to prevent terrorism, that probably has security ramifications as well.

This isn't to say that all profiling is bad. There is such a thing as smart profiling. It's not based on race or gender or ethnicity; it's based on intuition. When you see a man on the street running at you with a bloody meat cleaver, you're going to react. You might run, or hide, or get ready to fight. That's profiling. You don't know he has ill intentions. He might be a butcher, running after a customer who left something in his shop.

That kind of profiling works. Trained guards watching the crowd, looking for suspicious people or actions, works. I have long believed that you could get rid of the metal detectors and X-ray machines and baggage scanners at airports—all of them—and replace them with smart guards walking through the crowds and paying attention, and we'd all be more secure.

CKG: Of course, you'd have to trust those guards not to be in league with "the bad guys."

BS: We security people call that the "insider problem," and it's a huge one. Insiders, the people you must trust in order to implement security, are in the best position to subvert it. In the American West, most of the big train robberies involved an insider. Today, all the big bank thefts have involved an insider. The U.S. military's largest losses of classified information were due to insiders selling secrets to the Soviets. More theft from retail stores results from employees than from shoplifters.

Insiders are largely opportunists, which is why it's a larger problem where financial gain is concerned than with something like terrorism. Even so, insiders do pose a terrorist risk; it's possible that a sympathetic airport security guard could aid a terrorist plot.

And while you certainly have to worry about insiders turning bad, you also have to worry about them being incompetent, or untrained, or even just tired. It's not uncommon for an attacker to exploit insiders: tricking a secretary into divulging an important password, for example, or tricking a guard into granting access to a building. And the last thing you want is for a guard who is supposed to be looking for suspicious people at an airport to stop people simply because they are Arab, black, etc. You want guards to profile based on smart criteria, not dumb ones.

CKG: German media is talking about how the terrorists at the school in Beslan, Russia, were, in large part, aided and abetted by insiders in the police and security forces who may have been corrupt; the motivation there may not even have been so much sympathy with the aims of the terrorists, but rather money and food items. They hadn't been paid a living wage in ages.

BS: Exactly. The insiders weren't the terrorists; they were dupes of the terrorists. They probably had no idea of the full extent of the plot.

And the example points to an obvious countermeasure: pay your guards well. It not only makes them harder to bribe, it attracts a higher quality of employee. That was one of the motivations behind moving airport security from a low-bidder contractor to a government agency. We could debate how well it's worked out in the end, but it was a good idea.

CKG: What are some other security measures, talking as we are about security against terrorism, that make sense in your view?

BS: Let's look at the five steps. What are we trying to protect, and what are the risks? Terrorism is not a crime against people or property. It's a crime against the mind. It's a tactic designed to evoke a certain reaction. That's the goal of a terrorist, and he has different tactics he can employ against many possible targets.

With that in mind, it becomes clear that protecting the targets is not a viable security measure. If we defend all of our airplanes, and the terrorists switch to shopping malls, have we really gained anything? There are just too many places in our society where hundreds of people gather together in a small space—movie theaters, stadiums, restaurants, buses, schools, amusement parks, the part of the airport that isn't behind the security barrier—for us to possibly defend them all.

This doesn't mean that all such security measures are useless. It makes sense to protect airplanes, because even a small security failure means the death of everyone aboard. It makes sense to protect some high-profile targets like national monuments, political conventions, and the Olympics, because of their symbolic nature. But it does mean that there are limits to the effectiveness of these sorts of countermeasures, and they rapidly reach the point of diminishing returns. A security measure that merely forces the attacker to slightly modify his tactics is not very cost-effective.
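The arithmetic behind that point can be sketched directly. The toy model below is my own illustration, with invented numbers, not figures from the interview: as long as an attacker can switch to any unguarded target at no cost, hardening targets one at a time leaves the expected damage unchanged until essentially all of them are defended.

```python
# Toy model of target-switching (my assumptions, not figures from
# the interview): the attacker always picks an undefended target
# if one exists, so partial hardening buys almost nothing.

def expected_damage(num_targets: int, hardened: int,
                    attack_probability: float, damage: float) -> float:
    if hardened >= num_targets:
        return 0.0  # every gathering place defended: rarely affordable
    return attack_probability * damage  # attacker just switches targets

# Defend airplanes while malls, stadiums, and theaters stay open:
print(expected_damage(num_targets=1000, hardened=1,
                      attack_probability=0.01, damage=1_000_000.0))
# Defend 999 of 1000 targets and the expected damage is unchanged:
print(expected_damage(num_targets=1000, hardened=999,
                      attack_probability=0.01, damage=1_000_000.0))
```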

Here's a good example. The Greek government spent $1.5 billion on security for the Olympics, primarily to prevent terrorism. That money is now gone. It's spent. It bought security for a short period of time, instead of long-term security. Because it focused on a particular possible terrorist tactic instead of the terrorists themselves, it only had limited value.

In general, sensible security involves going after the terrorists. Intelligence: knowing what the terrorists are planning, whatever they're planning. Arrest and, at times, assassination: disrupting their plans, whatever they are. Interdicting funding: making it harder for them to operate, whatever they're doing. These security measures are less sexy than scuba divers patrolling the waters and blimps patrolling the air, but they're more effective.

Another area where security makes sense is emergency response. We need to be ready to respond, no matter what the terrorists do. This means funding local first responders: communications, detection equipment, HAZMAT suits, that sort of thing. Again, instead of focusing on a particular tactic, this focuses on our ability to respond.

CKG: You mentioned "knowing what the terrorists are planning" as one of the sensible security measures. The events of 9/11 showed that we didn't know, or didn't know enough, or didn't know how to connect the dots. Was the security failure that resulted from that preventable—say, by reorganizing the bureaucracy—or will we have to accept the fact that this is part of the makeup of this very complex system, that we can't know for sure if we are interpreting the data the correct way, and that catastrophic failures sometimes result, no matter what?

BS: There's a lot in that question, and I want to take the issues one at a time.

First, risks can never be brought down to zero. This has nothing to do with people, or technology, or quality of security. It's not a matter of being able to connect the dots, or having the right bureaucracy, or making sure the right defenses are in place. Life itself is risk. As you said, sometimes the rabbit gets eaten.

Also, there is risk inherent in a free society. Fundamentally, the possibility of crime is the price of liberty. Think about it; people who are free are also free to do bad things. If we decide to arrest them before they do bad things, we become a society that arrests people for thought crimes. Similarly, draconian prevention goes against basic concepts of liberty. We could all be safer if we arrested everyone, including ourselves. That's absolutely ridiculous, of course, and only demonstrates that we will gladly eschew draconian security—and accept more risk—if the price in freedom is too high.

So yes, even if you do a stellar job of security, failures will sometimes result. And catastrophic failures will sometimes result. This is something that we just have to accept. Perfection is not possible.

CKG: If that's the case, is it in the best interest of security to hold those in charge accountable if failures occur? I'm thinking about what Voltaire wrote in Candide: "In this country it is good to kill an admiral from time to time, to encourage the others." Good advice?

BS: That's the second part of your previous question. Just because perfect security is not possible doesn't mean that actual security can't be better or worse. Sometimes security systems fail because, well, just because. But more often security systems fail because they were designed badly.

And designing security systems well isn't easy. In Beyond Fear, I examine all sorts of different security strategies, and try to figure out what works best in different circumstances.

Turning to the specific incident you brought up, there were a lot of failures associated with 9/11. The primary failure was an intelligence failure. We didn't have good intelligence on al Qaeda and their plans, and there wasn't a good conduit for the information we had to reach the decision-makers. There were far too many turf battles—people protecting their information, their people, their budgets—and not enough sharing. There was a tendency for management to ignore differing opinions of those below them. And even higher in the management chain, counterterrorism was not a priority for the Bush administration before 9/11.

It's impossible to know whether an improved intelligence organization would have been able to "connect the dots." The problem is that the dots are only obvious after the fact. With the benefit of hindsight, it's easy to draw lines from people in flight school here, to secret meetings in foreign countries there, over to interesting tips from foreign governments, and then to INS records. Before 9/11 it wasn't so easy. There are millions of potential dots that could indicate thousands of potential plots, and the hard part is figuring out which to investigate and which to discard as noise. Even the best intelligence organizations are going to fail sometimes.

Aside from failures in intelligence, there were also problems in reaction. The minutes after the hijacking were a chaotic mess, where no one was in charge and no one knew what to do. Emergency response on the ground in New York was chaotic. Firemen and policemen couldn't communicate with each other, nor in some cases amongst themselves. I'm hard pressed to castigate these failures, though, because the attack was so unexpected.

I really don't see any other failures. Airport security didn't fail, because the terrorists didn't require any weapons. Airplane security didn't fail—oh, wait, yes it did. The airplane cockpit doors should have been reinforced. But we've known this was a problem for decades and—until 9/11—the airlines were still fighting the government over it.

In any case, I think that we do need to put more effort into intelligence: connecting the dots. This involves both analysts in Washington and people on the ground in the Middle East. It involves eavesdropping and language translation and intelligence gathering and everything else. It involves the intelligence community paying more attention to what's happening, and government paying more attention to what the intelligence community is saying—even if what they're saying doesn't match the government's political objectives.

I am less impressed with solutions that involve reorganizing bureaucracy. I opposed the formation of a Department of Homeland Security, and I still think it was a bad idea. Consolidating security functions increases the likelihood that we will miss something. I am likewise unimpressed with a bureaucratic reorganization of our intelligence community. If we get communication and coordination right, the bureaucratic organization is irrelevant. And if we get it wrong, no bureaucratic reorganization can possibly help.

CKG: If generals are always fighting the last war, are we fighting last year's terrorist threat? Can you talk about some things you imagine we might be faced with particularly because of actions taken today that may have consequences that have not been appropriately appreciated?

BS: If you look around at our nation's security, it's obvious that we're defending against last year's terrorist threat. The 9/11 terrorists used small knives to take over airplanes, so we ban small knives on airplanes. Never mind that passengers would never allow such an attack to happen again. The 9/11 terrorists bought one-way tickets, so we search people who buy one-way tickets more thoroughly. Some of the 9/11 terrorists entered the country on student visas, so we now scrutinize student visas more thoroughly. The 9/11 terrorists were young Arab males, so we're suspicious of young Arab males. Before 9/11, the threat was Timothy McVeigh—the radical right in America—then suddenly the threat was young Arab males. It's amazing how easily distractible we are.

That was my primary complaint with the 9/11 commission report. They correctly stated that a failure that led to 9/11 was a failure of imagination, and yet they exhibited the same failure in their report. Their report explained how to prevent a repeat of 9/11 from ever happening again, but didn't talk enough about preventing whatever the terrorists are planning next—which almost certainly is not a repeat of 9/11.

Now there is some sense to this. Most criminals are copycats. If someone invents a certain kind of crime—breaking into an ATM, for example—the criminals will do it again and again until the ATM is fixed so it's no longer possible. In 1971, someone named Dan Cooper invented a new way to escape from a hijacked airplane: jumping off the rear stairway with a parachute. After his story made the news, three different criminals tried the same trick. Eventually, Boeing changed the design of the 727 to prevent the rear stairway from being lowered during flight.

At the same time, al Qaeda has shown itself to be very inventive. They never do the same thing twice; they always think of something new. This is why I advocate spending money on security measures that go after terrorists and terrorist plots, regardless of who or what they are, and on emergency response: so we'll be better prepared, no matter what the terrorists do. Our biggest failure is always going to be a failure of imagination, so we need to build security that minimizes the effects of that failure.

CKG: Is it possible that al Qaeda and similar organizations can launch virtual attacks, presenting us with something of the equivalent of a cyber 9/11?

BS: Not for a long time. These attacks are very difficult to execute. The software systems controlling our nation's infrastructure are filled with vulnerabilities, but they're generally not the kinds of vulnerabilities that cause catastrophic disruptions. The systems are designed to limit the damage that occurs from errors and accidents. They have manual overrides. These systems have been proven to work; they've experienced disruptions caused by accident and natural disaster. We've been through blackouts, telephone switch failures, and disruptions of air traffic control computers. The results might be annoying, and engineers might spend days or weeks scrambling, but it doesn't spread terror; the effect on the general population has been minimal.

The worry is that a terrorist could cause a problem more serious than a natural disaster, but this kind of thing is surprisingly hard to do. Worms and viruses have caused all sorts of network disruptions, but that's happened by accident. In January 2003, the SQL Slammer worm disrupted 13,000 ATMs on Bank of America's network. But before it happened, you couldn't have found a security expert who understood that those systems had that vulnerability. We simply don't understand the interactions well enough to predict which kinds of attacks can cause catastrophic results, and terrorist organizations don't have that sort of knowledge either—even if they try to hire experts.

The closest example we have of this kind of thing comes from Australia in 2000. Vitek Boden broke into the computer network of a sewage treatment plant along Australia's Sunshine Coast. Over the course of two months, he used insider knowledge to leak hundreds of thousands of gallons of putrid sludge into nearby rivers and parks. Among the results were black creek water, dead marine life, and a stench so unbearable that residents complained. This is the only known case of someone successfully hacking a digital control system with the intent of causing environmental harm.

There are many possible Internet attacks, some of them affecting tens of thousands of computers. But they're not terrorism. We know what terrorism is. It's someone blowing himself up in a crowded restaurant, or flying an airplane into a skyscraper. It's not infecting computers with viruses, forcing air traffic controllers to route planes manually, or shutting down a pager network for a day. That spreads annoyance and irritation, not terror.

This is a difficult message for some, because these days anyone who causes widespread damage is being given the label "terrorist." But imagine for a minute the leadership of al Qaeda sitting in a cave somewhere, plotting the next move in their jihad against the United States. One of the leaders jumps up and exclaims: "I have an idea! We'll disable their e-mail. . . ." My guess is that all the other terrorists will laugh at him. Conventional terrorism— driving a truckload of explosives into a nuclear power plant, for example—is easier, and much more effective.

CKG: Coming back to something you said above: "So while I see a lot of irrational national security decisions made by government, I can understand these decisions by figuring out the real motivations behind them." Can you give examples of some of these real motivations?

BS: Remember the dressing room example? A department store could reduce the risk of shoplifting by installing cameras in dressing rooms, but the resultant customer outrage—and loss of business—would not be worth it. A security expert might look at this situation and say something like "the department store is behaving irrationally," but that's only because the security expert didn't understand that the primary motivation for the department store is sales, not security. The store is happy to accept worse security if that results in larger sales.

The Olympic story also illustrates this point. As a security expert, I know that the $1.5 billion would have been more effectively spent on counterterrorism in general rather than on counterterrorism at the Olympics in particular. But I wasn't the one with the money to spend. The money was spent by the Greek government and the Olympic organizers. To them, the most important thing was not to reduce the risk of terrorism overall, across the globe, but to reduce the risk of terrorist attacks during the two weeks of the 2004 Summer Olympics in Athens. Once you understand their real motivation, the ridiculous amount spent on security makes more sense.

This is not meant to be sinister. People and organizations are going to go through the five-step process from their own vantage point. They'll have their own ideas as to what's important, and their own personal notions of what trade-offs are worth it.

And trade-offs are cheaper if they're "spent" by someone else.

CKG: And policies' costs are less noticeable when they are distributed widely among taxpayers and the general public.

BS: Of course. It's far easier to spend a couple hundred billion invading and occupying Iraq if you don't force every citizen to write a personal check. Because if people had to write that check, they would think much more carefully about whether the security they were buying was worth it, or if there was a smarter way to spend that money.

You see this kind of attitude everywhere in security: costs that are, in economic terms, external, are much easier to swallow. The U.S. Department of Justice is happy to champion extreme security measures, because the loss of freedoms and civil liberties are "spent" by the American people; they're not as big a trade-off to the police. If someone installs a home alarm system and the burglar goes next door instead, that's a perfectly reasonable trade-off for the homeowner. But for the township, that's a waste of money. Remember when I asked what the value was if we defended our aircraft and the terrorists moved to shopping malls? Well, if you're an airline there's enormous value. If the airlines are making the trade-off decision, they're going to spend a lot of money defending airplanes. If the nation is making the decision, a more general security approach would make more sense.
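The home-alarm example can be made concrete with a couple of lines of invented arithmetic. The numbers below are mine, chosen only to show how a trade-off that is rational for the homeowner can be a net loss for the town.

```python
# Externalized costs, with hypothetical numbers: an alarm that only
# displaces burglary next door is privately worth it, publicly not.

burglary_loss = 5000.0   # expected loss if burgled
burglary_prob = 0.02     # annual chance of a burglary
alarm_cost = 30.0        # annual cost of the alarm, hypothetical
displacement = 1.0       # fraction of the risk merely pushed next door

homeowner_benefit = burglary_prob * burglary_loss - alarm_cost
town_benefit = burglary_prob * burglary_loss * (1 - displacement) - alarm_cost

print(homeowner_benefit)  # 70.0: a sensible purchase for the homeowner
print(town_benefit)       # -30.0: a waste of money for the township
```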

Earlier I talked about the DC sniper, and how the extreme reactions were not warranted by the small increase in risk. But imagine that you're the principal of a high school. Even if you know that the risk that a sniper will shoot someone at the football game is remote, you also know that you will probably lose your job if it happens. So when you look at the risks and trade-offs, it is a perfectly reasonable decision to institute extreme security measures against a very minor risk.

In general, the person who gets to make the security decision is going to make one that is best for him. And most of the time, security considerations are a small part of that decision, and the non-security parts matter more.

CKG: I want to connect that with something you wrote in your newsletter "Crypto-Gram." You described security as "something that is done to us," and you suggest that power lies in the aggregate, in organizing ourselves. Isn't our government that very organization we have created for the purpose of security of the individual?

BS: One issue at a time. Look at the examples above. The department store makes security trade-off decisions for its customers. The Olympic organizers, and the host country, made security decisions for the athletes and spectators. The Department of Justice makes security decisions for Americans. The school principal made the security decision to cancel the football game for his students.

This happens everywhere. Banks and credit card companies make security decisions for their customers. Microsoft makes operating system security decisions for its users. Cell phone companies decide what security measures to build into phones. Large database companies decide how much to spend on the security of your data, even though it's your data. When you fly on an airplane, all security decisions are made for you in advance. You can't fly on Less-Secure Airways: "We get you there faster, with less standing in line for security." You can't fly on More-Secure Airways: "We run background checks on everybody." You get the level of security that the government, and the airline industry, decides you get.

For the most part, people have very little control over the security in their lives.

The problem arises when you combine this with the fact that the person or company making the security trade-off is likely to make one that is best for him. Large database companies are not going to spend as much protecting your data as you would, because they don't bear the brunt of the losses if the data is stolen. Cell phone companies are going to spend more money stopping someone from making free calls than they will preventing eavesdropping, because the first problem affects their bottom line much more than the second. And airlines are going to spend more money defending against airplane terrorism than makes sense, because they'd go bankrupt if people were afraid to fly.

In Beyond Fear, I call these non-security factors "agenda." I don't mean that pejoratively. Every person and organization has its own agenda, and is going to make security trade-offs based on it.

Government, too, has its own agenda, separate from that of its constituents: an inherent conflict of interest. Politicians represent the people, but they are also concerned about their careers. This means that they are more likely to do things that will get them re-elected, and less likely to do things that won't, even if the latter are the right things to do.

Politicians know that they need to be perceived as strong leaders in the face of danger. They know that they need to be seen as doing something. They know that they're better off doing more than necessary than risking the worst happening after they've done less than they could have.

Money adds another conflict of interest. Because U.S. politicians need so much money to stay elected, and because they get that money largely from special interest groups, they are more likely to take the wants of those special interest groups into account. This is why, for example, the airline industry was able to prevent the FAA from forcing them to reinforce cockpit doors for so long. Or why, in the months when airport security was confiscating everything from corkscrews to knitting needles, matches and lighters—actual combustible materials—were never confiscated.

CKG: Let me guess, the tobacco industry lobbied and argued passengers needed matches and lighters to light up as soon as they got off the plane and to a smoking area?

BS: Exactly. The tobacco lobby got to Congress. But look what happened. Regardless of whether banning matches and lighters was a good idea—that's a completely separate issue—the government made the security decision based on completely non-security reasons: based on its agenda. We trust government to make security trade-offs for us, with our agenda in mind, but instead they make trade-offs with their agenda in mind. They make the trade-offs that are beneficial to them.

The effects of all this are that government is more likely to overreact with security measures that affect individuals—in expense, loss of convenience, and civil liberties reductions—than with security measures that affect favored industries. Which is about what we're seeing.

Well, the solution should be obvious. If both corporations and governments are going to make security trade-offs based on their agenda—and again I want to stress that I don't think this is necessarily a bad thing, but instead a reality that we simply have to accept and deal with—then we need to learn how to influence their agenda. In the corporate world, we can do this through economics: our purchasing decisions. In the political world, we can do this by voting. And in both areas, we do better if we're organized than if we're not.

If people refused to use cell phones without voice privacy, then the phone companies would offer such a service. Or if people demanded that the government enact strong privacy laws, then database companies would increase their security. In the months after 9/11, the government wanted to ban laptop computers from airline carry-on luggage. They relented because the airlines believed they would lose their high-paying business travelers. That's an example of people using their economic muscle—or, at least, airlines worried about people's economic muscle using their political muscle—to affect security on a national scale.

The moral is that while we don't have much direct control over most of the security in our lives, we do have substantial indirect control—over the agenda of those that have the direct control. The trick is to use our economic and political muscle wisely.

CKG: That presupposes that we can make decisions about what we want based on input that is valid and reliable, and that the playing field for the players in the security game is level. But we know that well-connected and well-organized vested interests inside and outside of government drive policy decisions. Upward of eighty percent of the Spanish population didn't want involvement in the Iraq war. But Spain participated anyhow.

BS: I never said this was easy. You're right, in order to make good security decisions people need to understand the trade-offs and what works and what doesn't. But that's no different from any other aspect of public policy. For people to make good decisions about healthcare, education, infrastructure expenditures, Social Security, and everything else, they need to be well-informed. If people are not well-informed, then politicians are going to make decisions based on their own agenda.

But sometimes the system works. The vast majority of Spanish citizens were against the Iraqi war, so when election time rolled around the government was voted out of office. People were too quick to point to the Madrid train bombings as the reason why the election turned out the way it did. They're wrong; the government would have been voted out of office even without the terrorist attacks.

CKG: In 1961, President Eisenhower warned against the overreaches of a military-industrial complex that grew powerful as a result of the security threat from Communism. Now, the war on terrorism has generated a fairly good-sized and growing security sector. Homeland Security secretary Tom Ridge spoke highly of this new public-private partnership: "We look to American creativity to help solve our problems and to help make a profit in the process." Does Eisenhower's warning apply here as well?

BS: It gets back to agenda. What Eisenhower was warning about was a group of organizations—both military organizations and corporations—that relied on the Cold War for its justification. Military budgets soared during the Cold War, and that resulted in both some very powerful people inside the military and some very wealthy corporations supplying the military. Because the power of these groups was directly dependent on the threat of Communism, it became in the best interest of those groups for the nation to be fearful of Communism. Their agenda included exaggerating the threat, and ensuring that the public never forgot the threat.

There is every indication that the same thing is happening today with the threat of terrorism. Since 9/11, the Republican party has run on the "we can keep you safe" platform. They spent decades claiming that the Democrats were soft on Communism, and they've discovered that they can claim the Democrats are soft on terrorism. Billions of dollars are being spent in government contracts: both on homeland security programs, like systems to track foreign visitors into the country, and on foreign expenditures, like the war and continued occupation of Iraq. The military is getting and spending money. The Department of Justice is getting and spending money. So is the Department of Homeland Security. The military-industrial complex is again becoming powerful, fueled by a new fear of a new enemy.

And this power brings with it a new agenda. It is in the interest of all these groups—politicians, the military, the corporations providing national security services—for all of us to believe that terrorism is a major threat. As long as we're fearful and rely on these groups to keep us safe, they benefit. When we move beyond fear and start thinking rationally, these groups lose power.

In Beyond Fear, I spend an entire chapter on the terrorist threat. It's important not to minimize it, but it's equally important not to exaggerate it. And it's vital to understand it. The more we do that, and the more we move beyond fear and start making sensible security trade-offs, the more we limit the power of this new military-industrial complex. We need the military. We need the Department of Justice, and we need the anti-terrorism capabilities of the Department of Homeland Security. We even need security companies. But we also need to be the ones making the trade-offs, with our agenda. The more we do that, the safer we will all be.