New Security Survival Guide: How To Layer A Solid Defense

By John Moore  |  Posted 05-14-2007


A New Look at Layers
While emerging classes of tools may fend off attacks at multiple layers of a security strategy, there are pitfalls if the tools are not properly configured, managed or integrated with existing systems.

Layer 1: Perimeter Security
Layer 2: Host Security
Layer 3: Identity and Access Management
Layer 4: Network Access Control
Layer 5: Vulnerability Management
Layer Integration: Pulling It All Together

QUESTION: What do you think is the biggest pitfall to implementing a layered defense? Write a letter to the editor at editors@ziffdavis.com


A New Look At Layers

Security is a many-layered thing for most I.T. managers. Attacks may target network, server or application vulnerabilities. Blended threats combine multiple attack vectors (Trojan horses, worms and viruses, for example) in an attempt to outflank an organization's defenses.

In response, enterprises erect a series of barriers on the principle that an attack that beats one security measure won't get past other protections. This approach goes by several names: layered security, defense-in-depth and, on the folksy side, belt and suspenders. But the underlying premise is the same.

The traditional view of layered security places firewalls at the outermost ring of protection, guarding the corporate network from Internet-borne incursions. Inside the firewall, attention turns to network-based intrusion detection/intrusion prevention systems that aim to snuff out attacks that sneak through the firewall. Antivirus software and host-based intrusion detection/prevention systems protect servers and client PCs, providing still another layer.

While emerging classes of tools may fend off attacks at multiple layers, there are pitfalls if the tools are not properly configured, managed or integrated with existing systems. In effect, chief information and security officers have to be jacks of all trades to implement an effective layered security strategy.

Consider Chris Buse, the state of Minnesota's first chief information security officer. He's been on the job eight months and has spent that time rolling out a layered security strategy built around numerous preventive controls. "We need to be good at all of the areas," he says of the layers of protection. "We need to have good perimeter defenses. We need to have host- and network-based intrusion detection. But we need to have other solutions as well, all the way down to the desktop level."

One hitch: Organizations can get caught in a cycle of adding layers of technology every time a new class of security products emerges, says John Pescatore, a vice president and research fellow at Gartner in Stamford, Conn. "If you keep spending on more and more layers, you start eating up more and more of the I.T. budget, leaving less money for meeting new business demands and applications," he warns.

Gartner reckons that the typical enterprise spends more than 5% of its I.T. budget on security. Pescatore says the current pace of security spending is twice that of I.T. spending overall. He pegs the growth in annual security spending at 9%, compared to 4% to 5% for I.T. overall.
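Pescatore's warning can be made concrete with a little arithmetic. The sketch below is a hypothetical illustration built on the growth rates cited above, with an assumed 4.5% midpoint for overall I.T. growth, a 5% starting share and a five-year horizon; it shows how security's slice of the budget compounds upward when it grows faster than I.T. spending as a whole.

```python
# Hypothetical illustration of the budget creep Pescatore describes: security
# spending growing at 9% a year against roughly 4.5% growth for I.T. overall,
# starting from a 5% share of the budget. The horizon and midpoint are assumptions.

security_growth = 0.09   # annual growth in security spending (article figure)
it_growth = 0.045        # midpoint of the 4%-5% overall I.T. growth cited
share = 0.05             # security's assumed starting share of the I.T. budget

for year in range(1, 6):
    share *= (1 + security_growth) / (1 + it_growth)
    print(f"Year {year}: security consumes {share:.1%} of the I.T. budget")

# After five years the share climbs from 5% to roughly 6.2% -- money no longer
# available, as Pescatore warns, for new business demands and applications.
```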

Pescatore prefers to define layers in terms of critical security processes: tasks such as vulnerability management and intrusion prevention. Process-based definitions like these don't commit I.T. managers to a specific technology approach and also guard against redundant technology.

For example, anti-spyware products entered the market a few years ago as a product set distinct from antivirus. But both support the same infrastructure protection process, Pescatore contends.

"What is so different about the process of blocking spyware from the process of blocking viruses?" he asks, adding that vendors such as Symantec have since consolidated anti-spyware and antivirus on the desktop.


Others are also challenging the layered model. Bruce Gnatowski, practice manager at security consultant Cybertrust, says the perimeter, for example, has become clogged with security products designed to shore it up. One strategy in this segment aims to bolster perimeter security with fewer devices (see Layer 1: Perimeter Security).

Gnatowski identifies the blurring of corporate network boundaries as another issue affecting perimeter security. This "de-perimeterization," a term coined by the Jericho Forum (a technology customer and vendor group based in San Francisco that explores cross-organizational security), has caused some I.T. shops to revisit the perimeter.

Emerging technology categories such as network access control seek to address the dissolving perimeter, giving organizations greater control over the myriad devices clamoring for network resources. Network access control products check the health of computing devices attempting to enter the corporate network (see Layer 4: Network Access Control).

There is also an increased emphasis on host security for so-called end points, such as servers and PCs, so that these devices can defend themselves (see Layer 2: Host Security). Those technologies include host-based intrusion prevention systems.

Other security layers let approved users in. That's the realm of identity and access management systems, which provide a mechanism for authenticating users and steering them toward the network resources appropriate to their organizational roles (more on Layer 3: ID and Access Management).

With vulnerability management, I.T. managers can tap an array of software products and professional services that scour networks, servers and applications for security gaps that external attackers or malicious insiders could exploit. Concerns over perimeter security breaches and insider threats have intensified efforts to examine applications for security lapses. Penetration testing and code scanning software are two approaches in this arena (see Layer 5: Vulnerability Management).

Finally, security-minded organizations seek to pull together security layers into a unified whole. Interfaces within and among layers have begun to appear. The advent of security information and event management systems promises to cull pertinent security data from a range of systems to provide a comprehensive view of vulnerabilities and incidents (see Pulling It All Together: Layer Integration).

"Large organizations are pushing the vendors and the technology to be much more integrated," says Jon Oltsik, senior analyst covering information security at Enterprise Strategy Group, a market research firm in Milford, Mass. "You don't need layers of security; you need areas of security with integration."

Read on to see how I.T. organizations are managing individual layers and tying them together.


Layer 1: Perimeter Security

An organization's perimeter defense is the oldest and, some would say, the most cluttered security layer. Firewalls have kept watch for two decades at the frontier where corporate networks reach the Internet. A firewall blocks questionable network packets from reaching internal networks, denying passage based on the IP address of the packet's source or the destination service (such as File Transfer Protocol) the packet is attempting to reach. Intrusion detection systems followed firewalls into the fray, detecting malicious software such as worms and other attacks that would get past a firewall. Intrusion prevention systems both detect and block attacks. Also on the network border: secure messaging gateways designed to stem spam and e-mail-borne viruses.
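As a rough illustration of the packet-filtering decision described above, the sketch below checks a packet's source address and destination service against an ordered rule list. The rules and addresses are hypothetical; a production firewall adds stateful inspection, logging and far more.

```python
# Minimal sketch of perimeter firewall logic: allow or deny a packet based on
# its source IP address and the destination service (port) it is trying to
# reach. The rules and addresses below are hypothetical examples.

RULES = [
    {"src_prefix": "203.0.113.", "dst_port": 21,  "action": "deny"},   # block FTP from one address range
    {"src_prefix": "",           "dst_port": 23,  "action": "deny"},   # block Telnet from anywhere
    {"src_prefix": "",           "dst_port": 443, "action": "allow"},  # allow HTTPS from anywhere
]
DEFAULT_ACTION = "deny"  # anything not explicitly allowed is dropped

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule, or the default."""
    for rule in RULES:
        if src_ip.startswith(rule["src_prefix"]) and dst_port == rule["dst_port"]:
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet("203.0.113.8", 21))    # deny  -- FTP from the blocked range
print(filter_packet("198.51.100.7", 443))  # allow -- HTTPS
```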

One reaction to those mounting lines of perimeter defense: consolidation. Kansas City Life Insurance, for one, replaced traditional, single-purpose devices with a hardware-software combination called a unified threat management (UTM) appliance. The device combines the firewall typical of perimeter defenses with intrusion prevention systems, anti-spam and antivirus software, and Web filtering.

Pricing for entry-level appliances designed for small offices starts at less than $1,000, while UTM products for large enterprises cost upward of $10,000. Vendors include Astaro, Cyberoam, Crossbeam, Fortinet and Secure Computing.

In opting for Astaro's unified threat management offering, Kansas City Life was able to unplug several pieces of gear, including its Cisco Systems PIX firewalls and an Internet Security Systems intrusion detection system, says Keith Beatty, network engineer at Kansas City Life.

The Astaro product's anti-spam and Web filtering capabilities, sold as optional features according to the vendor, let Kansas City Life jettison three additional security elements: GFI Software's MailEssentials anti-spam filter and MailSecurity e-mail firewall, and SurfControl's Web filtering application.

The simplification has lowered Kansas City Life's security costs by a few thousand dollars a year in reduced software licensing and support expenditures.

Kansas City Life has also scaled back its reliance on contractors. With one vendor's technology to support, the company can use in-house experts to do the job.

Still, organizations seeking the benefits of integrated perimeter security face implementation challenges with unified threat management. "One of the main issues you're going to have with UTM is the fact that you are doing so much in one box that you have to be careful about scalability," Beatty warns. "And that is where we stumbled a couple of times."

He says the appliance, although a "pretty powerful device" in his estimation, would take a performance hit during busy times of day. The product's Web filtering function, in particular, is extremely CPU-intensive, he says. The product scans for viruses on each user's Internet connection, so CPU demand mounts as the number of concurrent Web surfers rises. Kansas City Life maintains a home office staff of more than 500 and supports 1,400 agents in the field.

To balance the load, Kansas City Life shifted another CPU-intensive task, spam filtering, to a second appliance. That appliance is actually Astaro's software loaded onto the company's own hardware. Beatty says that smaller organizations can probably get by with one appliance. But as a best practice, midsize and larger organizations should "split the load between two boxes," he adds.

According to an Astaro spokesman, "Most of Astaro's customers run all their features on one unit that is sized correctly for their environment. In some situations, we have customers that like to run certain subscriptions on separate units." The spokesman adds that customers may use two appliances to prevent one appliance from becoming a single point of failure.

Beatty calls Web and spam filtering the two greatest consumers of CPU and memory resources: "They will definitely impact the hardware more than anything else."


Layer 2: Host Security

Some I.T. departments have redrawn the perimeter around PCs and workstations deep within the firewall. One class of solutions relocates intrusion prevention systems from the technology's traditional place on the network to servers, desktops and laptops. So-called host intrusion prevention systems typically include firewall protection for the individual server or desktop computer, and may also use a combination of signature-based and anomaly detection. Signature defenses, common in antivirus solutions, detect threats based on characteristics of a particular malware variety. Anomaly-based detection flags behavior that falls outside the range of a host's normal activities.
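The two detection styles can be sketched side by side. The snippet below pairs a signature lookup with a simple anomaly threshold; the hashes, baseline and multiplier are hypothetical illustrations, not any vendor's engine.

```python
# Sketch of the two host intrusion prevention detection styles: signature
# matching against known malware characteristics, and anomaly detection against
# a baseline of the host's normal behavior. All values are hypothetical.

KNOWN_BAD_HASHES = {"e3b0c44298fc1c14", "a94a8fe5ccb19ba6"}   # stand-in signature database
BASELINE = {"outbound_connections_per_min": 20}               # learned normal behavior

def signature_check(file_hash: str) -> bool:
    """Signature-based: flag anything matching a known-bad fingerprint."""
    return file_hash in KNOWN_BAD_HASHES

def anomaly_check(connections_per_min: int) -> bool:
    """Anomaly-based: flag behavior far outside the recorded baseline."""
    return connections_per_min > 3 * BASELINE["outbound_connections_per_min"]

# A host agent would typically combine both checks before alerting or blocking.
print(signature_check("e3b0c44298fc1c14"))   # True -> known signature, block
print(anomaly_check(250))                    # True -> unusual connection rate, alert or block
```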

Vendors include CA, eEye Digital Security, IBM, McAfee, SecureWave, Symantec and Third Brigade. Typically, host intrusion prevention will involve a price per agent (the protected host) and a management console fee, according to Blake Sutherland, vice president of product management at Third Brigade. The price for a server agent can range from the low hundreds of dollars to as much as $1,000. A desktop/laptop agent runs from $20 to $80 per unit. Enterprise pricing and volume discounts cause pricing to vary.

Rockford Health System, a health-care provider based in Rockford, Ill., deploys intrusion prevention at both the perimeter and host layers. The health system uses Top Layer Networks' intrusion prevention system at the perimeter and eEye Digital Security's Blink on hosts, especially Web servers.

"The key is that no one product can do it all," says Joe Granneman, chief security officer at Rockford Health. "You need to have a mixture."

Tom Moss, security practice leader at outsourcing vendor Bell ICT Solutions, says customers may opt for host intrusion prevention systems to deflect attacks perimeter defenses may miss. Bell ICT, with headquarters in Montreal, rolled out Third Brigade's host intrusion prevention system at parent Bell Canada's Western Data Center. The host-based systems "can pick up behavior changes to a server that might go unnoticed at the network layer," Moss says.

Host intrusion prevention systems may also employ so-called whitelisting as a way to head off attacks. In whitelisting, only authorized applications (an office productivity suite, for example) are allowed to run on a PC.

First National Bank of Bosque County, based in Valley Mills, Texas, uses this approach through SecureWave's Sanctuary. SecureWave bills Sanctuary as an end-point security product, meaning that it provides host-based security primarily for desktop computers and other network end-points.

Brent Rickels, a senior vice president at First National Bank, says SecureWave's whitelisting provides a more proactive solution than antivirus software. "We don't like sitting back and waiting for someone to fire something off at us," he says.

Host intrusion prevention systems require careful tuning to work effectively. A baseline of normal, or expected, host behavior must be established for systems that emphasize anomalous behavior detection.

To set up SecureWave on a host, for instance, the product analyzes software loaded on a PC and records each program. According to Rickels, his bank used a new PC to establish the baseline of acceptable programs. That machine, which has never been attached to a corporate network or the Internet, houses an untainted image of applications from which the whitelist is generated. The whitelist is stored in a centralized database, which manages the approved-program policy across the organization's PCs. Rickels says the task of creating the baseline PC took about a day.
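The baseline-building step Rickels describes can be approximated as hashing every executable on the clean machine and keeping the hashes in a central list; at run time, only programs whose hashes appear on that list may launch. The sketch below illustrates the general technique, not Sanctuary's actual mechanism, and the paths are placeholders.

```python
# Simplified sketch of application whitelisting: hash every executable on a
# known-clean baseline machine, store the hashes centrally, and allow a program
# to run only if its hash is on the list. Paths are hypothetical placeholders;
# this illustrates the technique, not SecureWave Sanctuary's implementation.

import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_whitelist(baseline_dir: str) -> set:
    """Walk the clean baseline image and record a hash for each executable."""
    return {file_hash(p) for p in Path(baseline_dir).rglob("*.exe")}

def may_run(program: Path, whitelist: set) -> bool:
    """Permit execution only for programs captured in the baseline."""
    return file_hash(program) in whitelist

# Usage (hypothetical paths):
# whitelist = build_whitelist(r"D:\clean_baseline_image")
# print(may_run(Path(r"C:\Program Files\Office\winword.exe"), whitelist))
```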

But host intrusion prevention systems that employ anomaly detection or whitelisting require ongoing attention after the initial configuration. Such systems must be tweaked whenever there's an application change. Otherwise, the system may perceive that change as unusual behavior and therefore a threat.

To prevent false alarms, the host system must constantly relearn what represents normal behavior. This relearning typically requires the administrator to switch the host intrusion prevention system from block to alert mode and observe how the system reacts to the application change. Moss of Bell ICT says the administrator then makes the necessary adjustments to account for the application change and switches the system back to block mode.
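The relearning cycle Moss outlines can be pictured as a small state machine: drop the agent into alert mode around an application change, fold the newly observed behavior into the baseline, then resume blocking. A purely hypothetical sketch:

```python
# Hypothetical sketch of the block/alert relearning cycle: around an application
# change the administrator switches the host IPS to alert mode, observes the new
# behavior, folds it into the baseline and re-enables blocking.

class HostIPS:
    def __init__(self, baseline):
        self.baseline = set(baseline)   # events considered normal
        self.mode = "block"

    def handle(self, event):
        if event in self.baseline:
            return "allowed"
        return "blocked" if self.mode == "block" else "alert only"

    def relearn(self, observed_events):
        """Administrator workflow following an application change."""
        self.mode = "alert"                    # stop blocking, just watch
        self.baseline |= set(observed_events)  # accept the new normal behavior
        self.mode = "block"                    # resume enforcement

ips = HostIPS({"app.exe reads config.ini"})
print(ips.handle("app.exe writes report.pdf"))   # blocked: not yet in the baseline
ips.relearn({"app.exe writes report.pdf"})
print(ips.handle("app.exe writes report.pdf"))   # allowed after relearning
```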

"The security team will have to have a very strong affinity for any changes going on at the application level," Moss says.


Layer 3: Identity and Access Management

Security isn't just about blocking intruders; mechanisms for permitting access are required as well. That's where the identity and access management layer comes in. This field includes technologies that house information on user identities and credentials (user names and passwords, for example) that let workers use I.T. resources. Identity and access management products may also enforce role-based policies that permit or restrict access to specific networks, applications and data based on an employee's job function.
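Role-based policy of the kind described here comes down to mapping a job function to the resources it may reach. A minimal sketch, with hypothetical roles and resources:

```python
# Minimal sketch of role-based access control: the employee's job function, not
# the individual, determines which applications and data are reachable.
# Roles, users and resources are hypothetical examples.

ROLE_PERMISSIONS = {
    "nurse":     {"clinical_records", "scheduling"},
    "billing":   {"financial_system"},
    "physician": {"clinical_records", "lab_results", "scheduling"},
}

USER_ROLES = {"jdoe": "nurse", "asmith": "billing"}

def can_access(user, resource):
    role = USER_ROLES.get(user)
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("jdoe", "clinical_records"))    # True: nurses may view clinical records
print(can_access("asmith", "clinical_records"))  # False: the billing role may not
```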

CA, Courion, Imprivata, Oracle and Passlogix are among the vendors in this area. Some charge per user; others charge per server.

Some I.T. departments aim to make the access task easier for users, who may need multiple passwords to sign on to different applications.

That was the case at Southwest Washington Medical Center in Vancouver, Wash. A typical employee at the 360-bed hospital uses between six and 12 applications every day. And personnel in departments such as the hospital's intensive-care unit might deal with up to 20 computer systems, says Christopher Paidhrin, security compliance officer at Affiliated Computer Services Healthcare Solutions. The medical center has outsourced its entire I.T. department to Affiliated Computer Services, with the exception of the chief information officer post.

The hospital also sought to address the access needs of mobile users: 500-plus external physicians in addition to more than 3,000 on-site staffers.

Paidhrin proposed a solution based on Microsoft's Active Directory and Imprivata's OneSign access management appliance. Active Directory serves as the hospital's single authentication data store, replacing several identity information repositories, including Novell eDirectory Server; a Remote Authentication Dial-In User Service (Radius) server, which provides authentication for remote users; and a proprietary password database tied to McKesson's core hospital information systems for patient, clinical and financial applications.

Imprivata's OneSign allows users to swap myriad access codes for a single password.
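Conceptually, this style of single sign-on behaves like a credential vault: the user proves identity once with a single password, and the system replays stored per-application credentials on the user's behalf. The sketch below illustrates that idea only; it is not Imprivata's design, and a real product would encrypt the stored credentials and verify the master password against the directory.

```python
# Conceptual sketch of enterprise single sign-on: authenticate once, then replay
# stored per-application credentials so users never juggle multiple passwords.
# Hypothetical illustration only; a real product encrypts the credential store
# and checks the master password against a directory such as Active Directory.

import hashlib
import hmac

class SSOVault:
    def __init__(self, master_password):
        self._check = hashlib.sha256(b"demo-salt" + master_password.encode()).hexdigest()
        self._app_credentials = {}   # application name -> (username, password)

    def enroll(self, app, username, password):
        self._app_credentials[app] = (username, password)

    def sign_on(self, app, master_password):
        supplied = hashlib.sha256(b"demo-salt" + master_password.encode()).hexdigest()
        if not hmac.compare_digest(supplied, self._check):
            raise PermissionError("master password rejected")
        return self._app_credentials[app]   # credentials handed to the application log-on

vault = SSOVault("one-strong-password")
vault.enroll("ehr_system", "jdoe", "legacy-ehr-password")
print(vault.sign_on("ehr_system", "one-strong-password"))  # ('jdoe', 'legacy-ehr-password')
```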

Paidhrin wanted assurances that the solution would live up to expectations and win over users. The ability to provide network-level authorization, meanwhile, would help the hospital maintain compliance with the Health Insurance Portability and Accountability Act's patient data security requirements.

"We knew the user experience would make or break the tool in terms of its acceptance," Paidhrin recalls. "We had to make sure it was fast enough for them."

Physicians and nurses put single sign-on to the test in a 2005 technology demonstration. The testers were sufficiently impressed.

Paidhrin also sought the support of the hospital's top brass. He says Kerry Craig, then Southwest Washington's chief information officer, was the lead champion, supported by a director-level Information Security Council. The council, which includes Paidhrin, advises the hospital's executive staff on security issues. The team was sold on two factors: reduced log-on hassles and the solution's compliance with HIPAA.

Since the system was rolled out, starting in early 2006, it has given credence to the notion that time is money.

The system's pair of single-sign-on appliances cost around $100,000.

And Paidhrin estimates the time savings per log-on session at 15 to 30 seconds, or about five minutes (18 to 25 cents) per person per day, not including physicians. With 3,200 staffers working an estimated 240 days a year, Southwest Washington stands to save between $576 and $800 a day, or roughly $138,000 to $192,000 a year, meaning it saw a return on its investment within six months to a year.
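The back-of-the-envelope math works out as follows; the figures are Paidhrin's estimates, and only the payback period at the end is derived here.

```python
# Worked version of the hospital's savings estimate, using the figures quoted
# above. Only the payback period at the end is derived rather than quoted.

staffers = 3_200
savings_per_person_per_day = (0.18, 0.25)   # dollars, low and high estimates
working_days_per_year = 240
appliance_cost = 100_000                    # the pair of single-sign-on appliances

for per_person in savings_per_person_per_day:
    daily = staffers * per_person
    annual = daily * working_days_per_year
    payback_years = appliance_cost / annual
    print(f"${daily:,.0f} a day -> ${annual:,.0f} a year; payback in about {payback_years:.1f} years")

# Prints: $576 a day -> $138,240 a year; payback in about 0.7 years
#         $800 a day -> $192,000 a year; payback in about 0.5 years
```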

"Every little bit adds up," Paidhrin concludes.


Layer 4: Network Access Control

Network access control products, NAC for short, operate similarly to identity management applications: They aim to let trusted parties into the network. In the case of network access control, however, the parties involved are machines rather than people. NAC products check devices connecting to the network for vulnerabilities, admit those that pass muster and quarantine offending machines for remediation.

NAC vendors include Aventail, Cisco, ConSentry, Juniper, Microsoft, Nevis Networks and StillSecure. Pricing typically starts at $4,000 to $5,000.

The Upper Canada District School Board, based in Brockville, Ontario, turned to network access control to address several problems. For one, teachers and students often use their own laptops on the school's network. The unmanaged devices sometimes introduced malware infections when they connected to the school's network, and that led to network downtime.

"Teachers and students feel they own the network and I.T. resources, and want to bring their own devices into the network, connect and access all the resources," says Jeremy Hobbs, the school board's chief information officer.

Upper Canada selected Nevis Networks' LANenforcer to get a better handle on devices seeking network resources. The school district runs five Nevis NAC appliances in its data center, along with one LANsight system. LANsight provides centralized configuration and monitoring for the LANenforcer systems.

The appliances conduct what Nevis calls an "end-point integrity check" on devices requesting network access. This check assesses whether a device has up-to-date operating system patches and current antivirus software. Machines that fail the test are quarantined, shunted to an isolated segment of the network until their vulnerabilities are rectified.
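Stripped to its essentials, the admission decision looks like the sketch below: inspect patch level and antivirus status, then assign the device either to the production network or to a quarantine segment. The attributes and thresholds are hypothetical, not Nevis' actual checks.

```python
# Sketch of a NAC end-point integrity check: devices with current patches and
# antivirus signatures are admitted; anything else is parked on a quarantine
# segment until it is remediated. Attributes and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    days_since_os_patch: int
    days_since_av_update: int

def admit(device: Device) -> str:
    patched = device.days_since_os_patch <= 30
    av_current = device.days_since_av_update <= 7
    if patched and av_current:
        return "production VLAN"   # passes the integrity check
    return "quarantine VLAN"       # isolated for remediation

print(admit(Device("teacher-laptop", days_since_os_patch=5,  days_since_av_update=2)))   # production VLAN
print(admit(Device("student-laptop", days_since_os_patch=90, days_since_av_update=45)))  # quarantine VLAN
```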

Hobbs cites end-point scanning as a key driver for the deployment. NAC vendors refer to this feature as a pre-admission control. But the technology is also important for keeping tabs on devices once they enter the network, a task vendors describe as post-admission control. Hobbs says he has seen evidence that students have used a range of hacking utilities, including port scanners and password crackers, to probe the school district's data center.

Upper Canada, however, plans to use network access control to blunt attempts to crack internal systems. The school district, according to Hobbs, intends to "round out our implementation of policies on the Nevis appliances to respond to hacking tools by shutting down the port in question for a predetermined period of time."

Hobbs says that having one's identity management house in order is critical before launching a NAC deployment. This is particularly true regarding post-admission control, which comes into play after admission is granted and role-based access is reviewed.

"Getting a grip on identity is essential," Hobbs explains, noting that a "granular understanding of user identity" drives the school district's access approach.

Two years ago, the district's schools operated as 120 Windows NT 4.0 domains, each with local authentication. Upper Canada has since adopted a centralized ID management system built around Microsoft's Active Directory and Identity Integration Server. The latter integrates with Active Directory, providing a single source of identity information. The Identity Integration Server provisions Active Directory accounts to network users, placing them in the appropriate security group.

LANenforcer serves as the school district's enforcement mechanism, using the identity information from Active Directory to permit or restrict access to applications. In this context, network access control becomes "a layer that differentiates access based on identity," Hobbs says.

But that layer would not have been as effective without the centralized identity store, Hobbs suggests. "I think it would have been a different experience if we hadn't already made that investment in identity-driven infrastructure," he says. "We get that much greater bang for the buck having done some of the homework behind the scenes."


Layer 5: Vulnerability Management

Lines of defense are helpful, but it doesn't hurt to make the target smaller. Vulnerability management tools offer the potential to do just that. While network access control is focused on PCs and laptops, vulnerability assessment products cover a broader territory, scanning PCs, servers and network devices for missing security patches or botched configuration settings that could lead to an attack. The tools may be installed on PCs and servers, and are available as a bundled hardware/software appliance. Vulnerability assessment may also be purchased as a service. Code scanners review lines of software code to identify flaws an attacker could exploit.
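At its simplest, a vulnerability scan compares what is installed and exposed on each host against a catalog of known weaknesses. The sketch below shows only that comparison step, with an invented inventory and advisory list; commercial scanners add discovery, authenticated checks and reporting.

```python
# Simplified sketch of vulnerability assessment: compare each host's installed
# software versions against a catalog of known-vulnerable versions and report
# the gaps. The inventory and advisories below are invented examples.

ADVISORIES = {
    ("openssh", "4.2"):    "remote flaw; upgrade to a patched release",
    ("apache",  "2.0.54"): "known vulnerability; apply vendor patch",
}

HOSTS = {
    "web01":  {"apache": "2.0.54", "openssh": "4.4"},
    "file02": {"openssh": "4.2"},
}

def scan(hosts, advisories):
    findings = []
    for host, software in hosts.items():
        for package, version in software.items():
            issue = advisories.get((package, version))
            if issue:
                findings.append(f"{host}: {package} {version} -- {issue}")
    return findings

for finding in scan(HOSTS, ADVISORIES):
    print(finding)
```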

Automated code analyzers let organizations build security into the software development process. Products from vendors such as Ounce Labs and Fortify Software look for design flaws in an application's source code, while vendors like Veracode analyze compiled binary code.

The objective of code analysis is to "reduce the attack surface of the application itself," says Matt Moynahan, chief executive officer of Burlington, Mass.-based Veracode. "You can't strip 100% of the risk out of an application; there's not enough time or money to do it. But you can strip out the vast majority of risk and give perimeter defense a fighting chance."
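A source-code scanner works by pattern: it walks the source tree and flags constructs with a history of causing vulnerabilities. The toy example below merely searches C files for a few risky calls; the analyzers named above go much deeper, modeling data flow through the application, but the goal of shrinking the attack surface before deployment is the same.

```python
# Toy illustration of static source-code scanning: walk a source tree and flag
# constructs commonly associated with vulnerabilities. Real analyzers perform
# far deeper analysis; the patterns and path here are examples only.

import re
from pathlib import Path

RISKY_PATTERNS = {
    r"\bstrcpy\s*\(": "unbounded copy; possible buffer overflow",
    r"\bgets\s*\(":   "reads input without a length limit; buffer overflow",
    r"\bsystem\s*\(": "shell execution; possible command injection",
}

def scan_source(root):
    findings = []
    for path in Path(root).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, why in RISKY_PATTERNS.items():
                if re.search(pattern, line):
                    findings.append(f"{path}:{lineno}: {why}")
    return findings

# Usage (path is hypothetical):
# for finding in scan_source("./src"):
#     print(finding)
```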

Another component of vulnerability management: software for automating penetration tests. This technology gives organizations a view of enterprise networks and applications from an assailant's perspective.

Andre Gold, Continental Airlines' director of information security, says penetration testing helps the airline identify weaknesses in application design and security processes. The company also uses penetration testing to check for weaknesses in the security products it plans to purchase.

Vendors offering automated penetration testing products include Cenzic, Core Security, Immunity and Mu Security. The open-source Metasploit Project offers the Metasploit Framework for penetration testing.

Continental uses Core Security's Core Impact software to automate penetration tests. The product gathers information about the network to be tested, scans for TCP/IP port vulnerabilities, and catalogs the operating systems and services running on host systems. Core Impact then launches attacks, using information gleaned during the discovery phase.
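The discovery phase Gold describes, cataloging which TCP services answer on each host, can be approximated with nothing more than the standard socket library. The sketch below is illustrative; the address is a documentation placeholder, and such scans should only ever be run against systems you are authorized to test.

```python
# Bare-bones TCP service discovery of the kind a penetration-testing tool runs
# during reconnaissance: attempt connections to a list of ports and note which
# answer. The target address is a documentation placeholder; scan only systems
# you are authorized to test.

import socket

COMMON_PORTS = [21, 22, 23, 25, 80, 443, 3389]

def discover_open_ports(host, ports=COMMON_PORTS, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Usage (placeholder address from the documentation range):
# print(discover_open_ports("192.0.2.10"))
```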

Organizations tend to use penetration testing sparingly, typically once a year, due to cost (outside consultants may charge $100,000 per test) and the potential for network disruption.

Automated testing is considered faster than manual testing and less expensive than hiring a third party. Core Impact's annual licensing fee, for example, is $25,000. Manual testing, however, may be used to supplement tool-based reviews because it has the potential to "identify flaws in business logic that automated scanners are usually incapable of finding," according to the Open Web Application Security Project, a non-profit organization based in Columbia, Md., that focuses on software security.

By using an automated tool, Continental has been able to increase the frequency of penetration testing for a broader set of line-of-business applications such as Continental.com, its Web site, which generates $3 billion in sales, Gold says.

Continental also employs other testing methods to uncover security issues. The airline uses a black box approach to simulate an external attacker's perspective. An outside firm is hired to do the testing and is given no information about Continental's network, hence the black box label.

In-house tests using the Core Impact tool leverage insider information. Testers will consult data flow and system interconnect diagrams to target particular applications. Gold says the objective is to determine whether a weakness in one application can be exploited to infiltrate another system. Tests of this type simulate a malicious insider or an outsider with administrator-level access.

In an exercise, Continental discovered that one application contained a poorly designed user authentication mechanism. If that interface were exploited, the compromised system could be used to breach an application that contained data on about 42,000 Continental employees.

"If we hadn't run the test, we wouldn't have known about it," Gold says. The company remediated the security lapse.

But it's not enough to fix problems as they surface. Gold's security team also discusses its test findings with the affected parties. For example, if vulnerabilities in a given system stem from application design and programming, Gold sets up a meeting with the application's business unit sponsor.

The mistake that some organizations make, Gold points out, is to conduct a penetration test and focus on report generation. A report, presented without discussion, may end up on a shelf. "That is not the purpose of a penetration test," he says.


Pulling It All Together: Layer Integration

The existence of myriad layers in the typical I.T. security strategy raises the question: Can they interact? The various security technologies have mostly acted in isolation over the years and continue to do so to a considerable degree, say I.T. managers and consultants.

"The struggle is being able to integrate and manage all those technologies as a unified defense as opposed to so many different point solutions in the enterprise," says Bell ICT's Moss.

Integration can be found within layers. At the perimeter, unified threat management appliances fill that role, combining firewall and intrusion prevention, among other functions. Consolidation also occurs at the host layer. Security suites from vendors such as McAfee and Symantec combine functions including antivirus, anti-spyware and identity protection.

Integration is trickier when using multiple vendors. While vendors have begun to build connections between their security offerings, customers still bump into limitations.

Take the case of Booz Allen Hamilton, a strategy and technology consulting firm based in McLean, Va. For vulnerability assessment, the firm uses nCircle Network Security's IP360, which has integration hooks into other products. Stan Kiyota, Booz Allen information security manager, says nCircle integrates with Remedy's help-desk system to smooth the job of addressing vulnerabilities once they surface. The linkage lets trouble tickets generated in nCircle flow into Remedy.

But there's a problem: "We don't use the help-desk software they [nCircle] happen to be partnered with," Kiyota says.

A class of technology called security information and event management software, or SIEM, promises to provide more coordination among security layers. These systems pull together security log data culled from a range of I.T. security systems and make it available for identifying patterns.

Randy Barr, chief security officer at WebEx Communications, went to KlioSoft of Concord, Calif., for a SIEM tool that pulls information from the event logs of the company's various devices to assess intrusion attempts and other security-related incidents. Those devices and systems include routers, firewalls, intrusion detection systems and content monitoring systems.

Minnesota CISO Buse also sees value in SIEM systems. The technology's correlation feature sifts through thousands of events to identify "a handful of things that are actually relevant," he says.
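The correlation Buse values can be illustrated in a few lines: normalize events from different layers, group related ones, and surface only the combinations worth an analyst's attention. The event data and the two-sensor threshold below are hypothetical.

```python
# Tiny illustration of SIEM-style correlation: events from different layers
# (firewall, IDS, host logs) are grouped by source address, and only sources
# that trip more than one kind of sensor are surfaced to an analyst.
# Event data and the threshold are hypothetical.

from collections import defaultdict

events = [
    {"source_ip": "198.51.100.9", "sensor": "firewall", "type": "port_scan"},
    {"source_ip": "198.51.100.9", "sensor": "ids",      "type": "exploit_attempt"},
    {"source_ip": "198.51.100.9", "sensor": "host",     "type": "failed_login"},
    {"source_ip": "203.0.113.4",  "sensor": "firewall", "type": "port_scan"},
]

sensors_by_source = defaultdict(set)
for event in events:
    sensors_by_source[event["source_ip"]].add(event["sensor"])

# Flag only sources corroborated by at least two different sensor types --
# the "handful of things that are actually relevant."
for source, sensors in sensors_by_source.items():
    if len(sensors) >= 2:
        print(f"investigate {source}: seen by {sorted(sensors)}")
```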

In some instances, the correlation job is assigned to an outside party. Darryl Lemecha, CIO at data broker ChoicePoint, says the company provides data from its vulnerability assessment, intrusion detection and patch management systems to a managed security services provider that analyzes the data.

Data correlation can bring insight into whether servers are properly patched to withstand a specific attack, as indicated by the intrusion detection system. Armed with this information, Lemecha says, ChoicePoint can choose to ignore some situations (cases in which the company has the patches in place to fend off the detected attack) and focus on those that are potentially more damaging.