Assessing the Threat
The first step in creating a security policy is setting clear priorities, which in turn requires an effective risk assessment. Every risk assessment boils down to two basic questions: How likely are different types of security failures? How much could each of those failures cost?
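Those two questions correspond to the standard quantitative formula used in formal risk assessment: Annualized Loss Expectancy (ALE), the cost of a single incident multiplied by how often it is expected to occur each year. The sketch below illustrates the arithmetic with purely hypothetical scenario names and dollar figures; as the article notes, in practice both inputs are usually rough estimates.

```python
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized Loss Expectancy: expected yearly loss from one threat scenario.

    single_loss_expectancy    -- estimated cost of one incident (dollars)
    annual_rate_of_occurrence -- estimated incidents per year (may be < 1)
    """
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenarios: (description, cost per incident, incidents per year)
scenarios = [
    ("stolen laptop with customer data", 250_000, 0.5),
    ("phishing-driven account takeover", 40_000, 3.0),
]

# Rank scenarios by expected yearly loss to set policy priorities.
for name, sle, aro in sorted(scenarios, key=lambda s: -ale(s[1], s[2])):
    print(f"{name}: ${ale(sle, aro):,.0f}/year")
```

The ranking, not the absolute numbers, is usually the useful output: it forces a side-by-side comparison of threats that would otherwise be argued about in isolation.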
Experienced security practitioners often develop a relatively reliable "gut sense" of the answers to these questions, but coming up with comprehensive, verifiable and quantifiable answers is far from straightforward. Security failures occur when systems behave in unexpected ways, and attackers constantly search for new ways to force them to do so.
Failures also arise out of a confluence of events--some or all of which may be difficult to predict or even detect on their own. Damage is similarly difficult to foresee, particularly when it includes factors such as legal costs, remediation and harm to public image.
The key to an effective risk assessment is adopting a flexible process that pushes decision-makers to discuss and seriously consider the relevant issues without getting bogged down in excessive quantitative detail. Online payment service PayPal, for example, has relied on a relatively informal risk assessment process when making significant changes to its security policy. "You generally have a fairly good idea of where the threat areas are for you and where you need to beef up your standards," says PayPal Chief Information Security Officer Michael Barrett.
Security advisor and firewall pioneer Marcus Ranum argues that formal, quantitative risk assessment is largely a waste of time and money. Because assessments are based primarily on conjecture and estimates, they are mostly used to justify the gut-sense recommendations and/or decisions made before the assessments began.
"Risk assessments are usually just a technique for bullshitting clueless management," says Ranum, chief security officer for Tenable Security. "You're multiplying wild-ass guesses by wild-ass guesses, and the results are going to be wild guesses. Really what that is, is shorthand for saying the organization needs to sit down and have a realistic discussion about what can go wrong."
On the other hand, risk assessments involve complexities that can call for a degree of specialized expertise. Security technologist and author Bruce Schneier, the chief security technology officer of BT, argues that businesses should hand at least a significant part of the process to an outside contractor.
"These things are hard, they're complicated and they're subtle," Schneier says. "The interactions are weird and not what you'd think. The best thing to do, in most cases, is to find someone who's an expert to do it for you."
Factors other than the inherent complexity of the task often cause risk assessments to bog down in excessive detail. Outside contractors, for example, have an incentive--whether acted upon or not--to produce exhaustive and detailed deliverables in order to justify their fees. Various contributors to the process also seek to "bulk up" the final report in an effort to avoid blame in the event of a serious security incident.
"A lot of [the reason] why people do this is CYA," Schneier says, "so they can say, 'The reason we did this is because the numbers said we should. Therefore, don't sue us.'"
Another common error in risk assessment is to focus excessively on hardware and software to the exclusion of risks that stem from user behavior. Attackers, after all, don't need to steal and decipher password files if they can trick users into giving up their passwords over the phone; they don't have to break into a network server to steal a database if they can just extract it from a stolen laptop or storage device. These risks tend to get short shrift in most aspects of security practice; inadequate attention to them at the beginning of the process reinforces that tendency.