Secure By Design

By David Raikow  |  Posted 08-20-2008

Imagine that you are observing people construct a new building. They are not looking for anything elaborate, and as long as they can get a roof over their heads quickly, conveniently and inexpensively, they are prepared to accept flaws and imperfections. So they dive right in, with a rough sketch of a blueprint, a minimum of planning, and very little assessment of local climate, geology or traffic patterns.

Instead, they focus their efforts and money on selecting and acquiring good, solid building materials. Construction itself is something of an afterthought: Individual pieces of the structure may be competently assembled, but little thought is given to how they fit together.

How much would you be willing to risk on such a structure?

As absurd as this scenario sounds, it bears a striking resemblance to common approaches to corporate information security. Security generally creates no additional revenue and is often viewed as disrupting efficient, productive business operations. In addition, security encompasses a number of highly complex technical issues that are understood by relatively few individuals.

As a result, in spite of decades of warnings from security experts, enterprise decision-makers believe that they can address these issues simply by identifying the right combination of hardware and software products--a temptation that many security vendors work hard to reinforce.

Without a realistic, well-implemented security policy, no firewall is going to do all that much. Practical, effective security doesn't come from a particular product, any more than good architecture comes from a particular brick supplier. And just as there is no one "correct" blueprint, there is no single collection of security strategies or techniques that will work for every business. A good process, however, will enable you to develop a security policy that will meet the needs of your enterprise.

Assessing the Threat

The first step in creating a security policy is setting clear priorities, which in turn requires an effective risk assessment. Every risk assessment boils down to two basic questions: How likely are different types of security failures? How much could each of those failures cost?
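The two questions above lend themselves to a back-of-the-envelope calculation, often called annualized loss expectancy: estimated incidents per year multiplied by estimated cost per incident. The sketch below is a minimal illustration of that arithmetic; the threat names and figures are hypothetical, and the inputs are, as always, estimates rather than measurements.

```python
# Annualized loss expectancy (ALE) sketch:
# expected yearly loss = likelihood per year * cost per incident.
# All threat names and numbers here are hypothetical illustrations.

threats = {
    # threat: (estimated incidents per year, estimated cost per incident, $)
    "stolen laptop":       (2.0,    15_000),
    "phishing compromise": (0.5,   120_000),
    "web server breach":   (0.1, 1_000_000),
}

def annualized_loss(likelihood_per_year, cost_per_incident):
    """Expected yearly loss from one threat category."""
    return likelihood_per_year * cost_per_incident

# Rank threats by expected yearly loss to set priorities.
ranked = sorted(
    ((name, annualized_loss(*params)) for name, params in threats.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, ale in ranked:
    print(f"{name}: ${ale:,.0f}/year")
```

The value of such a table is less the numbers themselves than the ranking it forces decision-makers to debate.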

Experienced security practitioners often develop a relatively reliable "gut sense" of the answers to these questions, but coming up with comprehensive, verifiable and quantifiable answers is far from straightforward. Security failures occur when systems behave in unexpected ways, and attackers constantly search for new ways to force them to do so.

Failures also arise out of a confluence of events--some or all of which may be difficult to predict or even detect on their own. Damage is similarly difficult to foresee, particularly when it includes factors such as legal costs, remediation and damage to public image.

The key to an effective risk assessment is adopting a relatively flexible process that pushes decision-makers to discuss and seriously consider the relevant issues without getting bogged down in excessive quantitative detail. Online payment service PayPal, for example, has relied on a relatively informal risk assessment process when making significant changes in its security policy. "You generally have a fairly good idea of where the threat areas are for you and where you need to beef up your standards," says PayPal Chief Information Security Officer Michael Barrett.

Security advisor and firewall pioneer Marcus Ranum argues that formal, quantitative risk assessment is largely a waste of time and money. Because assessments are based primarily on conjecture and estimates, they are mostly used to justify the gut-sense recommendations and/or decisions made before the assessments began.

"Risk assessments are usually just a technique for bullshitting clueless management," says Ranum, chief security officer for Tenable Security. "You're multiplying wild-ass guesses by wild-ass guesses, and the results are going to be wild guesses. Really what that is, is shorthand for saying the organization needs to sit down and have a realistic discussion about what can go wrong."

On the other hand, risk assessments involve complexities that can call for a degree of specialized expertise. Security technologist and author Bruce Schneier, the chief security technology officer of BT, argues that businesses should outsource at least a significant part of the process to an outside contractor.

"These things are hard, they're complicated and they're subtle," Schneier says. "The interactions are weird and not what you'd think. The best thing to do, in most cases, is to find someone who's an expert to do it for you."

Factors other than the inherent complexity of the task often cause risk assessments to bog down in excessive detail. Outside contractors, for example, have an incentive--whether acted upon or not--to produce exhaustive and detailed deliverables in order to justify their fees. Various contributors to the process also seek to "bulk up" the final report in an effort to avoid blame in the event of a serious security incident.

"A lot of [the reason] why people do this is CYA," Schneier says, "so they can say, 'The reason we did this is because the numbers said we should. Therefore, don't sue us.'"

Another common error in risk assessment is to focus excessively on hardware and software to the exclusion of risks that stem from user behavior. Attackers, after all, don't need to steal and decipher password files if they can trick users into giving up their passwords over the phone; they don't have to break into a network server to steal a database if they can just extract it from a stolen laptop or storage device. These risks tend to get short shrift in most aspects of security practice, and inadequate attention to them at the beginning of the process reinforces that impulse.

Creating the Policy

The next step in creating a security policy is to build the policy itself, based on the threat priorities clarified or illuminated by the risk assessment. In essence, this is a matter of identifying the various costs associated with mitigating each of those threats, and selecting a set of strategies for doing so that is appropriate to relevant business needs and budget limitations.

There's no getting around the fact that securing information and IT resources imposes costs on an organization. The costs of hardware and software tools are the most obvious--and can seem the most daunting to a budget-conscious IT practitioner--but they are often the easiest to address. Assessing costs and benefits to fit expenditures within a defined budget is, after all, what executives are paid to do.
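The budgeting step can be made concrete with a simple selection rule: given each mitigation's cost and its estimated risk reduction, pick the measures with the best reduction per dollar until the budget is spent. The sketch below uses hypothetical measures and figures; a greedy rule like this is only an approximation, but it captures the tradeoff.

```python
# Hedged sketch of fitting mitigation spending into a budget:
# greedily pick measures with the best risk reduction per dollar.
# All names and figures are hypothetical illustrations.

mitigations = [
    # (name, cost in $, estimated yearly risk reduction in $)
    ("full-disk encryption on laptops", 20_000, 25_000),
    ("phishing awareness training",     10_000, 30_000),
    ("web application firewall",        50_000, 60_000),
]

def pick_within_budget(options, budget):
    """Greedy selection by risk reduction per dollar spent."""
    chosen, spent = [], 0
    for name, cost, reduction in sorted(
        options, key=lambda m: m[2] / m[1], reverse=True
    ):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

selected, total = pick_within_budget(mitigations, budget=60_000)
```

With a $60,000 budget, the cheap, high-leverage measures win out and the expensive firewall is deferred--exactly the kind of result a planner should sanity-check against business needs rather than accept mechanically.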

The intangible costs to the user and organization--in efficiency, productivity, morale and training--prove much more difficult to get a handle on. Adding functionality or interoperability almost inevitably adds new vulnerabilities and avenues for attack, and securing those functions usually places burdens on the users.

Allowing users to send and receive e-mail attachments exposes them to attachment-borne malware, but blocking attachments disrupts workflow and can impede cooperation with other organizations. Giving users remote access to the corporate network creates opportunities for attackers to intercept sensitive network traffic or masquerade as legitimate users, but blocking access severely limits employee flexibility.

Because of these burdens imposed on users, a good security policy takes user behavior and responses into account. Users are neither static nor easily controllable. They are intensely aware of their working environment and, when faced with a perceived obstacle, will look for ways to go over, around or through it.

When security measures appear to present such an obstacle, user responses can easily create a greater problem than the one those measures were intended to address. Education, moreover, will not always change user behavior.

Therefore, when creating a security policy, planners should take careful note of the measures that require user cooperation, those that do not, and the ones that fall somewhere in between. For instance, user behavior has essentially no impact on server patching, while password and VPN usage place a great deal of power in the user's hands.

A surprising number of strategies fall into the middle. For instance, companies don't need user cooperation to block instant messenger clients at the perimeter or to filter Web content, but those actions can prompt users to turn to Web-based chat alternatives or anonymous Web proxies as workarounds. This renders the blocking and/or filtering ineffective, and can introduce new threats by luring users to insecure Web sites.

Planners must decide how much they are willing to rely on user cooperation for an organization's security. The more faith they place in the user, the more flexibility and functionality they can provide, but the more precarious their IT defenses will be.

Tenable Security's Ranum argues that by default, users should have no access to any IT resources, with exceptions made for the functionality they must have to do their jobs. "You have to assume that employees will do exactly the things you don't want them to do, because sooner or later some of them will," he says.

On the other hand, PayPal's Barrett argues that the IT security staff's role is to provide users with safe access to as much functionality as is feasible--not to block them as much as possible. "Cars don't have brakes to make them slower, but to allow them to be driven faster safely," he says.

This can prove to be an extremely difficult decision: Both extremes of this trust continuum present clear costs and benefits, but the various points in the middle--where most organizations will inevitably find themselves--present much more complex and murky tradeoffs, with often counterintuitive results. The bottom line is that there is no "right" answer for everyone. Each enterprise has to assess its own needs, its own staff and its own internal work processes to find the best balance.

Winning User Cooperation

Where user cooperation is necessary, even trusting and permissive planners cannot assume that well-meaning users will do as they are told. Planners should take workflows and user psychology into account and do everything possible to create an environment in which cooperation is the user's easiest and most attractive option.

Take the classic dilemma of user authentication policies. Users don't like to authenticate and will take any opportunity to simplify or avoid the process. If you force users to remember complex passwords, they respond by writing them down, trading one vulnerability for another. For years, security professionals have tried to counter with user education, only to find that users--even the IT staff--simply would not cooperate.
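A typical complexity rule is easy to state in code; the harder question is whether users will tolerate it. The sketch below implements a hypothetical policy (the specific thresholds are illustrative, not drawn from the article):

```python
import string

def meets_complexity_policy(password: str) -> bool:
    """Check a password against a hypothetical complexity policy:
    at least 12 characters, with upper- and lowercase letters,
    a digit, and a punctuation character."""
    return (
        len(password) >= 12
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )
```

Every clause tightened here raises the odds that users respond by writing passwords down; the behavioral cost belongs in the policy decision alongside the cryptographic benefit.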

Addressing this behavior is the real benefit of non-password authentication technologies like smart cards and biometrics, which sidestep the impulse to simplify or write down passwords. In practice, however, users often undermine these measures as well, refusing to log out unless forced to and leaving their smart cards or tokens plugged into their computers when they leave.

At least one large software developer tackled this problem by tying user authentication and facility door locks to the same smart card. If employees want to use the restroom, they must take their smart card with them to unlock the restroom door, thereby logging them off the network at the same time.

Every security policy should also include provisions for managing the decision-making process for future policy changes. Effective implementation over time will always require temporary exceptions and long-term modifications in response to unforeseen circumstances, new business opportunities and technological developments.

PayPal, for instance, has a set of well-defined procedures by which departments can request exemptions from specific rules, complete with appeal mechanisms. Without such procedures in place, the resulting ad hoc process is at least as likely to be governed by interpersonal dynamics and office politics as by objective criteria.

Prepackaged, customizable security policies and standards libraries available from a number of vendors can be a useful starting point for many enterprises. This is particularly true for organizations regulated by legal or industry security standards that impose IT security and/or auditing requirements, including those for the Health Insurance Portability and Accountability Act, Gramm-Leach-Bliley Act, Sarbanes-Oxley Act, the Payment Card Industry Data Security Standard and numerous state laws.

But prepackaged products should be viewed only as a starting point--one that will require extensive, labor-intensive customization before it will be appropriate for any given organization. PayPal, like many financial services enterprises, uses a prepackaged library, but modified it heavily before implementation.

Implementing the Policy

The final step is to put the security policy into action. Implementation can follow a variety of schedules, proceed in different chronological orders, and be divided into different stages or phases for rollout. In any case, it consists of two primary elements.

The first element consists of educating users and initiating appropriate changes in corporate culture. The second includes technical installation and configuration. This element might be further divided into changes to the hardware and software with which users directly interface (their own computers) and those with which they do not interact (servers, perimeter defenses). It is only at this point that the selection and configuration of particular security products becomes significant, because only now can they serve as effective tools for implementing existing policy.

"The biggest problem with policy is that people make it too complicated and don't write it in human language," says Emman Ho, vice president of IT Services at A&E Television Networks. "You can make a policy like a phone book, and I guarantee that no one will read every line."

A&E's solution is to target its security policy document at the end user, keeping it short--say, three pages--and emphasizing nontechnical language. Other enterprises find this balance by drafting a number of different documents intended for different audiences.

PayPal uses three separate documents: a policy document that outlines high-level overarching goals and priorities in a short format, a standards document that goes into more detail about rules and expectations, and a procedures document that spells out specific details.

Another key factor to keep in mind throughout the implementation process is the speed with which a particular organization can absorb and internalize change. Just as it would be foolhardy to expect to simultaneously shift every user within an enterprise to new hardware or software, it is counterproductive to attempt to force major changes in organizational and user behavior instantaneously. This should be a gradual process, ideally beginning with the development of some consensus and buy-in early in the planning stages.
