Diminishing Returns

Edward H. Baker

Dealing with security is like trying to figure out the Year 2000 problem, only there's no deadline, just continued uncertainty. No one can say with great confidence how vulnerable the Internet, and the networks hooked up to it, are to a massive attack. Some remarkably effective worms have spread across the Internet, and what has spared us from their worst effects is that they carry no payload; the only burden on the Net is the congestion caused by the worm's own transmission. These missiles are landing, and all they do is cause more missiles to launch, but there is never a boom. It makes you a little worried. What if somebody decides to put a payload on one of these things?

Aside from security against disruption of the network or its machines, there's security against compromise of proprietary information. The usual steps to protect against such disclosure entail varying degrees of paranoia: rules prescribing who at the firm can do and see what; technical barriers to viewing, modifying, or printing information; and extensive audit trails of who saw what, and when. But I don't know how much adopting the techniques of the government and the military, treating one's systems as classified information, helps in the long run. We've moved to a more open technological environment, one in which it's harder to keep secrets. And that raises really interesting questions about how corporations should deal with security. They are struggling between a model where the answer is to tighten everything up (if the threat is greater, the lock has to be stronger) and a model where they actually try to embrace this open environment, saying, "If this is the kind of environment we live in, maybe we should be reworking the way we are structured to credit openness, rather than secrecy, as crucial to our success."
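To make those three measures concrete, here is a minimal sketch of what they might look like in an in-house service. It is illustrative only: the role names, permissions, and the AuditLog class are invented for this example, not taken from any particular product.

```python
import datetime
from dataclasses import dataclass, field

# Rules prescribing who can do and see what: a simple role-to-permission map.
# Role and permission names here are illustrative placeholders.
PERMISSIONS = {
    "analyst": {"view"},
    "editor": {"view", "modify"},
    "records_officer": {"view", "modify", "print"},
}

@dataclass
class AuditLog:
    """Audit trail of who attempted what, on which document, and when."""
    entries: list = field(default_factory=list)

    def record(self, user: str, action: str, document: str, allowed: bool) -> None:
        self.entries.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "who": user,
            "action": action,
            "document": document,
            "allowed": allowed,
        })

def request_access(user: str, role: str, action: str, document: str,
                   log: AuditLog) -> bool:
    """Technical barrier: deny any action the role does not explicitly permit."""
    allowed = action in PERMISSIONS.get(role, set())
    log.record(user, action, document, allowed)  # every attempt is logged
    return allowed

if __name__ == "__main__":
    log = AuditLog()
    print(request_access("pat", "analyst", "print", "Q3-forecast.xls", log))            # False
    print(request_access("lee", "records_officer", "print", "Q3-forecast.xls", log))    # True
    for entry in log.entries:
        print(entry)
```

Note that even denied attempts land in the audit trail; in this style of design, the log of who tried and failed is often as valuable as the log of who succeeded.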

The question is really what risk-reward ratio makes sense for a given set of data, not how tightly you can turn the screws toward maximum security. Every CIO wants to be prepared for that terrible rainy day when everything's offline, compromised, hacked. But there is a limit. At some point you hit diminishing returns trying to account for every single contingency ahead of time. And when a crisis does come, it is often people's resourcefulness as events unfold that matters, not the number of contingencies put into place beforehand.

How can you tell if you're overdoing security? There are telltale signs: employees can't get to the material they need because they're not allowed to dial in from home or the road, or there's data they're not authorized to access even though they need to see it. That's the balancing act CIOs have to manage. They don't want to clog everything up in the name of security, but they also can't be laissez-faire and then, when that rainy day comes, say, "Gee, I'm sorry." If you find yourself saying no to legitimate request after legitimate request for information, or you're setting up barriers to the use of data out of some notion of security and the company can no longer get its job done, you've got a problem.

As economists would put it, there is a sufficient level of security, and reaching it involves separating the areas in which you really do want to balance costs and benefits from those in which you can simply brook no compromise. If you're a hospital holding sensitive patient data, it's a lot tougher to say, "Well, we ran the numbers, and it wasn't worth it to batten down the hatches a little tighter."
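For the areas where the numbers do govern, the standard arithmetic from risk analysis is annualized loss expectancy: a safeguard pays for itself when the loss it averts per year exceeds what it costs per year. The sketch below walks through that calculation; all dollar figures and frequencies are invented for illustration.

```python
def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = single loss expectancy (SLE) x annualized rate of occurrence (ARO)."""
    return single_loss * annual_rate

# Illustrative numbers only: a breach costing $500,000, expected once
# every five years without the safeguard, once every fifty years with it.
ale_without = annualized_loss_expectancy(500_000, 1 / 5)    # $100,000/year
ale_with = annualized_loss_expectancy(500_000, 1 / 50)      # $10,000/year
safeguard_cost = 60_000                                     # per year

net_benefit = (ale_without - ale_with) - safeguard_cost     # $30,000/year
print(f"Net annual benefit of the safeguard: ${net_benefit:,.0f}")
# A positive number says the lock is worth its price; a negative one
# says you have hit diminishing returns for this particular risk.
```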

But for many applications, even mission-critical ones, security is legitimately just part of the business case. That's where putting the risk-reward balancing act squarely in front of the business units can be very helpful. For CIOs, it might not be a bad idea to treat the business unit as the customer and ask business unit heads exactly what risk-reward ratio they want: "You just tell us, and we'll set the dials. Tell us where to place the guards, and if you don't want any, hey, you're the boss. Take your chances."