E-commerce and Internet Security: Why Walls Don't Work
Gunpowder, one of mankind's most disruptive innovations, made its European debut in the early part of the 14th century. Until then, "security" specialists had a simple, effective strategy: Build tall, thick walls to keep out enemies. An entire economic ecosystem had grown up around that strategy because it worked (if you could afford it).
Gunpowder changed all that. But it wasn't until the middle of the 20th century that the military strategies of nation states really evolved past the "tall, thick walls" approach.
We can draw similar analogies today when it comes to digital security. We know that cyber-warfare--and its juvenile training ground, cyber-crime--will be major problems. Yet our strategy, at least for now, is to continue building taller-than-ever, thicker-than-ever digital walls around our data.
The digital economy that's being erected on the foundation of the Internet is nearly impossible to build "walls" around. It's too large and complex. (For more, check out Melanie Mitchell's 2009 book Complexity: A Guided Tour, published by Oxford University Press.)
Even more critical is the fact that complex systems can exhibit chaotic behavior under a wide range of circumstances, which means they can't be reliably modeled statistically. Critical aspects of the Internet are always changing: patterns of system load, available capacity and routing performance all fluctuate continuously and unpredictably.
While we can build and monitor statistical models for the "normal" behavior of the Net, the outliers catch us off guard. Control systems can become increasingly smart at dealing with many of these variables, but variables will multiply apace. From a security perspective, this adds significantly to the challenges we face. How do we tell the difference between a rare system outlier event and a deliberate attempt to push the Net into instability?
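To make the outlier problem concrete, here is a minimal sketch (my illustration, not anything from the column) of the kind of statistical baseline monitoring described above: a rolling z-score over a traffic metric. The point the sketch makes is the column's point: the statistics can flag a deviation, but they cannot say whether it is a rare legitimate event or a deliberate push toward instability.

```python
from statistics import mean, stdev

def zscore_outliers(samples, window=20, threshold=3.0):
    """Flag samples that deviate sharply from the recent baseline.

    A flagged sample could be a rare legitimate spike or an attack;
    the statistics alone cannot tell the two apart.
    """
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady traffic with one sudden surge at index 30
traffic = [100 + (i % 5) for i in range(30)] + [500] + [100 + (i % 5) for i in range(10)]
print(zscore_outliers(traffic))  # -> [30]: the surge is flagged, but its cause is unknown
```

Note also a side effect visible in this toy model: once the surge enters the baseline window, it inflates the estimated variance, so the detector briefly becomes less sensitive right after an anomaly--one small example of the variables multiplying apace.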
What we're facing is more akin to infectious disease control than to warfare. So how do we design an "immune system" for global e-commerce? How do we design a system that constantly seeks out "pathogens" and creates appropriate "antibodies" to kill them, without taking the network down with something akin to digital cancer or an auto-immune disease?
One of the Internet's historically preferred defenses was separation and dispersal. Lots of little ISPs and carriers might not be efficient (or easy to optimize), but it's hard to take them all down at once. Many different engineering approaches--implemented in hardware and software--help protect the Internet from a "monoculture" attack, in which a single vector can infect everything. Web 2.0 or 3.0 doesn't have to change that, but, in the name of standards and efficiency, it will probably try to.
Then there's the challenge of signal-to-noise ratio. If everything on the Web signals its condition all the time (which is increasingly the case), a good defense requires you to sift through a lot of normal data to find the things you need to pay attention to. When something critical fails and sends out a warning, it quickly impacts its neighbors, so they send out warnings too.
Before you know it, you have a blizzard of warnings, making it hard to detect the root cause and take action. Attackers will know this, so triggering false positives can be almost as effective as a real attack: It's just another kind of denial of service.
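The alert-blizzard problem can be sketched in a few lines. This is a toy illustration under assumed names (the `depends_on` map and the hostnames are hypothetical, not any real monitoring API): when one component fails, everything downstream alarms too, and a simple suppression rule--ignore any alert whose upstream dependency is already alarming--collapses the storm back to a likely root cause.

```python
# Hypothetical dependency map: component -> its upstream dependency
depends_on = {"web-1": "db-1", "web-2": "db-1", "cache-1": "db-1"}

def root_causes(alerts, depends_on):
    """Keep only alerts that have no alarming upstream dependency."""
    alarming = set(alerts)
    return sorted(a for a in alarming
                  if depends_on.get(a) not in alarming)

# db-1 fails; its three dependents all send out warnings too
storm = ["db-1", "web-1", "web-2", "cache-1"]
print(root_causes(storm, depends_on))  # -> ['db-1']
```

The same sketch shows why false positives are a weapon: an attacker who can inject a fake alert for an upstream component would cause this rule to suppress the genuine warnings beneath it--denial of service aimed at the defenders' attention rather than the network itself.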
I see the potential that a network of global e-commerce represents. But I'm concerned that there's too much digital gunpowder for even the strongest, thickest walls to keep out.
John Parkinson is the head of the Global Program Management Office at AXIS Capital. He has been a technology executive, strategist, consultant and author for 25 years.