Insecurity Rules: A Chronic Security Problem

I’ve worked on many projects that involved information security, but security has never been my focus. I’ve always relied on an appropriate group of security specialists to deal with the various security demands of my business technology solutions. We used simple checklists during project reviews and final acceptance tests to ensure that all security angles were covered; trained our engineers in good coding practices; and held project managers accountable for developing and managing to the security plan for each project. We also relied on a set of global security policies that covered items such as staff background checks and facility access that spanned every project.

Until the mid-1990s, this loosely coordinated but siloed approach worked well enough. Even in the intersection of the defense and healthcare industries, where I spent most of the first decade of my career, we never had a security breach in either business or clinical information sets, analog or digital. Still, I did learn a few things.

Hard Lessons Learned
First, people are a much greater source of vulnerabilities than poorly designed software or bad firewall settings are. Second, scrutiny by many engineers reveals more potential flaws than examination by just a few experts—but the experts tend to spot problems no one else does. Third, it’s not safe to assume that just because you got it right once you will always get it right. Security is a continuous process, which makes it a kind of “productivity tax” on engineering—and that’s a source of problems as well.

What changed in the mid-’90s was the scale of potential exposure. When we went from a few thousand users for most business software to hundreds of millions (aka the Internet), the workarounds we’d been using stopped working. I immediately started to (a) get paranoid about information security and (b) worry that I wasn’t paranoid enough. It was clear we needed to transition from ad hoc security frameworks to solidly engineered—and adaptable—security architectures. We also needed to think carefully about the risk model each architecture was designed to address. I worked for years with a simple mental model that I never articulated well until 2004, when I saw the following equation from Ira Winkler, president of the Internet Security Advisors Group and author of Spies Among Us:

Risk = Σ (Threat × Vulnerability ÷ Countermeasures) × Value, summed over all threats and vulnerabilities

This model is important for several reasons:

  • It reminds security managers that, in addition to managing risk, they must assess the value component of the equation—if risk is high, value should be high as well.
  • It introduces the distinction between threats, which you generally can’t do much about, and vulnerabilities, which you can and should do something about.
  • It introduces the concept of countermeasures—what you do to address your vulnerabilities (and not what you do about the threats).

The idea of countermeasures focused on vulnerabilities is critical. Remember, you generally can’t do much about the threats. A few good countermeasures, however, will go a long way toward eliminating or containing multiple vulnerabilities, provided you know what your vulnerabilities are. In most organizations, there will be an “optimization point” at which the next countermeasure won’t be worth the decrease in risk, or increase in value, that will result. (In lay terms, this is the point of diminishing returns.)
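
To make the arithmetic concrete, here’s a minimal sketch in Python of how that sum works. Every label, score and scale below is hypothetical; a real program would derive them from its own threat modeling.

```python
exposures = [
    # (label, threat, vulnerability, countermeasures, value)
    # All scores are invented, on arbitrary 1-10 scales.
    ("phishing vs. staff credentials",  8, 6, 4, 9),
    ("SQL injection vs. web front end", 6, 7, 7, 8),
    ("stolen laptop vs. local data",    4, 5, 2, 6),
]

def risk(entries):
    """Winkler's sum: (threat * vulnerability / countermeasures) * value,
    taken over all threat/vulnerability pairs."""
    return sum(t * v / c * val for _, t, v, c, val in entries)

print(f"Aggregate risk score: {risk(exposures):.1f}")
```

Note how the model behaves: stronger countermeasures shrink a term, and a high-value asset amplifies whatever risk remains. That is exactly the lever a security manager has to work with.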

From this model we can start to build some security policies, processes and practices:

  • Execute security planning across the entire organization. Create a culture of security awareness, but don’t make security everyone’s job, because then it becomes no one’s job.
  • Develop an integrated, active security infrastructure. If security is a natural part of the work environment, it won’t be ignored or subverted.
  • Set and manage return on investment for your security expenditure. Perfect security is essentially unachievable and unaffordable. The risk-to-value relationship lets us operate at a known and acceptable level of risk (see the sketch after this list).
  • Manage your conformance to external regulations. Some of those regs may not seem to make sense, but at least you’ll stay out of court, out of jail and out of the news.
  • Implement security governance. “Quis custodiet ipsos custodes?” applies just as much today as it did in Roman times. Better to have clear oversight of the security environment than a closed and secretive group accountable to no one.
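
As a companion to the ROI point above, here’s one hypothetical way to find that optimization point: rank candidate countermeasures by risk reduction per dollar and stop funding when the marginal return falls below a threshold you’ve agreed to tolerate. All names and figures below are invented for illustration.

```python
candidates = [
    # (name, annual cost in $, expected annual risk reduction in $)
    ("offline master-key storage",   5_000, 120_000),
    ("staff phishing training",     20_000, 180_000),
    ("24/7 log monitoring",        150_000, 200_000),
    ("full traffic inspection",    400_000, 150_000),
]

MIN_RETURN = 1.5  # demand at least $1.50 of risk reduction per $1 spent

# Fund countermeasures in order of return until the marginal one
# falls below the threshold -- the "optimization point" above.
for name, cost, reduction in sorted(candidates, key=lambda c: c[2] / c[1], reverse=True):
    ratio = reduction / cost
    if ratio < MIN_RETURN:
        print(f"stop: {name} returns only {ratio:.2f}x per dollar")
        break
    print(f"fund {name}: {ratio:.1f}x return on ${cost:,}")
```

The point isn’t the particular numbers; it’s that making the cutoff explicit forces the risk-to-value conversation to happen before the budget runs out rather than after.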

No matter how you slice it, this is a lot of work to do, let alone to do well. So can we expect things to ever get better? Maybe, maybe not, but there are some principles that, if followed, will make improvement more likely, and that should be part of every company’s approach to information security.

  • Complexity hides vulnerability, so simplicity is generally preferable from a security perspective, but…
  • Simplicity doesn’t mean homogeneity. Whatever there is most of will be attacked most and will have its vulnerabilities exposed fastest, so it requires the strongest countermeasures.
  • Hygiene counts. Most organizations don’t bother with even the simplest countermeasures—storing master keys offline, for example—because they don’t think of them, because they want to avoid the (relatively small) inconvenience or expense or, most commonly, because they underestimate the risks they are running.
  • There is a balance here that’s both difficult to achieve and necessarily dynamic. Clearly, prevention is best, but we have to remember that we’re often dealing with imperfect objects—people and processes that change rapidly—so no matter how good we get, monitoring and detection will remain essential.

Pay special attention to the things you must have but can’t control: outside suppliers, subcontractors, maintenance organizations, services outsourcers and so on. Contracts and service-level agreements should include security elements, but those are really only good for apportioning blame if something goes wrong. No matter what a contract says, you’ll be in the soup if there’s a security breach, so you’d better pay the closest attention.

Of course, if you do detect a vulnerability, you have to do something about it, or your detection won’t mean much. Too many organizations fail to act on detected vulnerabilities because they are embarrassed, afraid of the consequences or just too busy. And that’s just dumb.

Finally, we should all insist that the technology we use not be part of the problem. Technology won’t eliminate every vulnerability or provide all the countermeasures we need, but there’s no excuse for it to be a vulnerability in itself or, worse yet, a threat.

Stay worried.

John Parkinson has been a business and technology consultant for more than 20 years.
