By Erik Blas
There’s been much ado about the uncertainty around securing containers, even as excitement around containerization platforms grows. As organizations adopt containers more broadly, including for hosting business-critical applications, understanding the limitations of containers and the available security workarounds is a must.
A useful way to look at container security risks is to compare containers to a traditional virtualization platform such as VMware. In VMware, each virtual machine is completely isolated from every other VM on the host, making it roughly as secure as a physical machine. If a specific VM instance is attacked, you can wall it off and prevent widespread damage to other VMs on that same server. Containers take a slightly different approach to virtualization, gaining performance and scale by sharing the Linux kernel across multiple container nodes. That shared kernel is also the security risk: breaching the kernel from inside one container potentially grants access to every container running on it, the keys to the store, if you will.
In other words, a breach could expose every other container instance on the shared kernel, including the code and applications running in them. If the vulnerability is severe, a large swath of the environment could be exploited. If a virtual machine is attacked in a similar fashion, by contrast, the breach may be contained to that specific VM instance without spreading across the VM farm.
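The shared kernel is easy to see in practice. As a minimal sketch, assuming a Linux host with Docker and the public `alpine` image available: the kernel version reported inside a container is the host's own, whereas a VM would report its guest kernel.

```shell
# Containers share the host kernel: both commands report the same version.
uname -r                                   # host kernel version

# Run only where Docker is installed (skipped otherwise):
if command -v docker >/dev/null 2>&1; then
    docker run --rm alpine uname -r        # same value: same kernel
fi
```

A hypervisor-based VM, by contrast, boots its own kernel, so the two commands would report different versions.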
Consider a typical real-world example: if an e-commerce shopping cart system running in a container on a website is hacked, the malicious code could infiltrate the shopping cart system and send messages to all of the other containers on the shared kernel to wreak further havoc across the network or website. If the shopping cart application is not containerized, the attacker may only reach whatever the compromised user’s credentials allow.
What to do about it
There’s no immediate remedy for improving containerization security. The lowest-risk approach is to avoid using containers for any application with stringent security and compliance needs. That category includes banks, insurance firms, health-care organizations, retailers and anyone else storing sensitive customer, financial or IP information in the application.
We believe that over time the open-source community will address this problem or at least provide a workaround. Unfortunately, making containers more secure is not a simple fix; in fact, it’s an extremely technical problem to solve. It will likely require some variant of a sophisticated, potentially federated security model, in which each container holds a token that must be presented for access to the kernel.
In the meantime, here are a few considerations to bolster the security of containers:
1. Don’t forget the basics. You still have to lock down the server OS and applications just as you would without containers: encrypted, always updated and with the proper access controls. Continuous integration (CI) practices and tools can help with the always-updated mantra.
2. Create separate security profiles for each daemon. Each container does not need to have the same security access.
3. Use an orchestration platform. Marathon and Mesos are two of the common platforms that can automate access levels and remove human error.
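On the second consideration, Docker's `--security-opt` flag lets each container run under its own seccomp profile rather than a single shared one. A minimal sketch, assuming Docker and a hypothetical `cart-profile.json` file name; the syscall whitelist here is illustrative only, not production-ready:

```shell
# Write a minimal seccomp profile: deny all syscalls except a small
# whitelist (illustrative only; a real workload needs many more).
cat > cart-profile.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "futex", "nanosleep"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
EOF

# Sanity-check that the profile is well-formed JSON before using it.
python3 -m json.tool cart-profile.json > /dev/null && echo "profile OK"

# Apply the profile to one container only (run where Docker is installed):
# docker run --security-opt seccomp=cart-profile.json --rm alpine echo hi
```

Because the profile is passed per container, the shopping cart can run under a tighter syscall policy than, say, a batch-processing container on the same host.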
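On the third consideration, Marathon (which runs on Mesos) accepts declarative app definitions over its REST API, so resource limits and instance counts are applied consistently by the platform instead of by hand. A minimal sketch, assuming a Marathon endpoint at `http://localhost:8080` (hypothetical) and an illustrative app id:

```shell
# Declarative app definition: resources and instance count are encoded
# once, in the definition, rather than configured manually per container.
cat > cart-app.json <<'EOF'
{
  "id": "/shopping-cart",
  "cmd": "python3 -m http.server 8000",
  "cpus": 0.25,
  "mem": 128,
  "instances": 2
}
EOF

# Validate the definition locally before submitting it.
python3 -m json.tool cart-app.json > /dev/null && echo "definition OK"

# Submit to Marathon's REST API (run only where Marathon is reachable):
# curl -X POST -H "Content-Type: application/json" \
#      http://localhost:8080/v2/apps -d @cart-app.json
```

Keeping definitions like this in version control also means access and resource changes go through review, removing one common source of human error.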
Containers are like every shiny new technology: they aren’t yet right for every single situation, and developers should be judicious about what they run in containers because of the known security risks. Containers aren’t yet a parity replacement for traditional virtualization tools. They are, however, a valuable new piece of the ecosystem, appropriate for scenarios such as development and QA, and for extending scalability and flexibility for data sets and applications without strict compliance requirements.