Containers are quickly becoming a fixture of software deployments. Driven, at least in part, by the adoption of microservices, containers provide lightweight sandboxes in which to run software.
Unlike virtual machines governed by hypervisors such as Hyper-V, VMware or Oracle VirtualBox, containers do not need an entire operating system stack. A virtual machine emulates a bare-metal server; everything else necessary to run software has to be layered on top. In exchange for that overhead, software gains process and memory isolation from software running in other virtual machines. Each operating system in a virtual machine is inaccessible from the others running on the server, and from the host operating system. Since problems with software running in one virtual machine can’t affect another, it’s safe to pack more software onto a single physical server.
Containing Security Risks in Containers
Containers don’t work the same way. They all run on the same host operating system and share its kernel. Containers provide process-level isolation: processes in one container are protected from the bad behavior of processes in other containers, and there are limits on the resources a containerized process can consume. But every container can make system calls into the shared host kernel, so the isolation is weaker than a virtual machine’s.
Like virtual machines, containers virtualize the environment. Unlike virtual machines, containers all run on the same operating system. This is why it is not uncommon to see containers deployed within virtual machines. The virtual machine provides the operating system and container environment on which the containers can be deployed. Unfortunately, there is a lot of overhead associated with that arrangement.
Security issues do arise when running software in containers. A compromised container can affect the host operating system and, hence, the entire machine, physical or virtual, on which it is running. An entire industry is emerging around ensuring that threats against containers are addressed.
Docker (whose image and runtime environment are almost synonymous with containers), Google, IBM, Red Hat and others are all creating developer tools to help ensure the security of both container images and running containers. Moreover, tools for monitoring and managing container clusters, including Prometheus, Notary and SPIFFE (Secure Production Identity Framework for Everyone), are also designed to ensure that nonmalicious but poorly designed software doesn’t cause downtime in container clusters.
Dealing With the Security Trade-off
All of these efforts could help make up for a basic trade-off with containers: In order to be lightweight, containers can’t offer isolation and security akin to a virtual machine. While this is an intentional part of container design, it still keeps some companies from using containers in production systems.
One solution, of course, is to not use containers at all. Stick with virtual machines for workloads that must be secure or in situations where you don’t want the failure of one workload to affect other systems. The downside is that virtual machines require a lot of resources and are not as portable as containers.
A different solution is to rethink the container as a more secure form of virtualization without the need for a full operating system stack. This is the thrust of Kata Containers. A project hosted by the OpenStack Foundation, Kata is a container runtime that complies with Open Containers Initiative (OCI) standards but is actually a lightweight virtual machine. Kata is, in essence, a container running inside a stripped-down virtual machine.
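Because Kata presents itself as an OCI-compliant runtime, a cluster can opt individual workloads into it without changing anything else. A minimal sketch, assuming Kata and its containerd shim are already installed on the nodes (the handler name `kata` varies by installation), might look like this in Kubernetes:

```yaml
# RuntimeClass that routes pods to the Kata runtime handler.
# Assumes the kata-containers runtime is installed on the nodes;
# the handler name here is an example and may differ per install.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# A pod that opts into Kata, so its containers run inside a
# lightweight virtual machine instead of directly on the host.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata
  containers:
  - name: nginx
    image: nginx
```

Pods without `runtimeClassName: kata` continue to use the default runtime, so ordinary and VM-isolated containers can share the same cluster.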
Another option, Nabla containers, championed by IBM, are also OCI-compliant containers that promise better security and isolation than standard containers. They do this by limiting the system calls a container can make to the host operating system kernel. The result is a container that is heavier than a standard one but still far more lightweight than a virtual machine.
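Nabla’s mechanism is its own, built on unikernel techniques, but the general idea of shrinking a container’s system-call surface can be illustrated with a standard seccomp profile of the kind Docker accepts. This is an analogy for the concept, not Nabla’s actual implementation; the allowed syscall list below is purely illustrative:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "clock_gettime", "ppoll"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

With a profile like this applied (for example, via Docker’s `--security-opt seccomp=profile.json` flag), any system call outside the allow list fails with an error, so a compromised process has far fewer paths into the host kernel.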
Two Branches of Virtual Evolution
The Nabla approach is more in line with the philosophy of a container. While it minimizes certain forms of attack and misbehavior on the part of applications within a container, it doesn’t try to reproduce the virtual machine setup.
A Kata container, on the other hand, looks much more like a virtual machine. It literally creates a QEMU/KVM virtual machine for each container or Kubernetes pod and uses a hypervisor. (QEMU and KVM stand for “quick emulator” and “kernel-based virtual machine.”) Nabla, in contrast, behaves as all containers do, but with restrictions that strengthen security.
Kata and Nabla represent two different clades in the virtualization tree of life. The common ancestor is the current Linux container, but their adaptations are very different. Kata brings containers back to a more heavyweight approach, reminiscent of its virtual machine ancestor. Nabla, on the other hand, extends the container family into a more secure branch that tries to further limit access to the host kernel without the need for a full virtual machine.
Software development, like evolution, can take unexpected twists and turns. It’s not always clear what will make one organism or software more successful than another. Even though Kata seems retrograde, it may be the only way to attain enough isolation for some applications. On the other hand, Nabla, by keeping with the core philosophy of containers, continues the evolution of containers as a way to provide enough isolation without reproducing the virtual machine.
Failure Is an Option
Yet another path is that both technologies may fail as companies realize that containers should only be used when the isolation and security of a virtual machine are not needed. The dominance of current container heavyweights Docker, Google and Red Hat may make it difficult for new types of containers to gain a foothold, even if they are OCI-compliant.
Unfortunately, that scenario is most likely. With software, survival of the fittest doesn’t always mean survival of the best solution.