One of the most frequently heard objections to the widespread enterprise adoption of containerized architectures such as Docker is that they lack an obvious mechanism or policy for maintaining security.
Today at the second annual North American developers’ conference in its short history, Docker Inc. attacked that assertion head-on. It even demonstrated a new component of the Docker ecosystem called Notary, whose purpose is to establish a baseline of trust that could prevent malicious containers from being injected into processes.
Notary Republic

In the process of redesigning large portions of Docker for security, said Docker founder and CTO Solomon Hykes during Monday morning’s keynote, his team’s developers “have attacked one particular problem: trusted, cross-platform delivery of content.”
One of the major problems Docker faces, both as a company and as a concept, is escaping development test beds to become adopted in production systems within the enterprise. As many vendors here are wont to point out in the days leading up to this critical conference, what enterprises are afraid of is the lack of an obvious security mechanism.
Among the Docker faithful, such as they have become over just the last few years, the problem of trust in security has not been raised as much of an issue. But as Hykes told an audience of mostly developers, this is because developers who test Docker environments amongst one another tend to trust each other anyway.
Notary is designed to serve as a filter for the distribution of containers and Docker-related content in a project, including and especially in the production phase. This way, only digitally signed content that has been entered into Notary’s registration system gets passed into production.
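The gating idea can be sketched in a few lines. This is a minimal, purely illustrative model of signed-content registration — not Notary’s actual design or key handling — using an HMAC as a stand-in for a publisher’s digital signature; all names and the key are hypothetical:

```python
# Conceptual sketch of a signed-content gate (illustrative only; Notary's
# real signing scheme and key management are far more sophisticated).
import hashlib
import hmac

PUBLISHER_KEY = b"publisher-signing-key"  # hypothetical signing key

registry = {}  # name -> signature of content registered for production

def publish(name, content):
    """Sign content and enter it into the registration system."""
    registry[name] = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def admit_to_production(name, content):
    """Gate: pass only content whose signature matches its registration."""
    expected = registry.get(name)
    if expected is None:
        return False  # never registered: reject outright
    actual = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, actual)

publish("webapp:v1", b"trusted build artifact")
assert admit_to_production("webapp:v1", b"trusted build artifact")
assert not admit_to_production("webapp:v1", b"tampered artifact")
assert not admit_to_production("evil:v1", b"never signed")
```

The point of the sketch is the default-deny posture: unregistered or altered content fails the check, so only what the publisher signed reaches production.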
“It’s platform-agnostic,” said Hykes, referring to the ability for containers and content that involve both Windows Server and Linux to be distributed and filtered. “Any content can be distributed by Notary: source code, [virtual machines], build packages, vacation pictures, whatever you want.”
I’m Talking to the Man in the Middle

In practice, Notary acts as a “pipeline” inside of package distribution commands. This way, as Docker Inc. Security Lead Architect Diogo Mónica demonstrated, a component being pushed to production can be crosschecked and validated in the same step.
Mónica actually staged a man-in-the-middle attack on his own server, as a demonstration of how simple it is for a malicious user to leverage the HTTP protocol to inject arbitrary code into a process – code that could include a simple command instructing a container to delete the file for its own kernel.
It was a scary demonstration, certainly at first. It showed how problematic security can be when the origin domain name of a running component is all that businesses rely upon to establish trust inside their Web servers. And it showed how fragile container security actually was, with respect to one of the most common use cases: Web service… at least up until Monday.
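Why the attack works is easy to model: content fetched over plain HTTP carries no proof of origin, so a man in the middle can swap the payload undetected; a signature checked against a trusted key catches the swap. The sketch below is illustrative only (an HMAC stands in for a real signature scheme, and all names are hypothetical):

```python
# Illustrative model of the demonstrated weakness: unsigned content can be
# silently replaced in transit; signed content cannot.
import hashlib
import hmac

TRUSTED_KEY = b"shared-with-publisher"  # hypothetical trust anchor

def sign(payload):
    """Stand-in for a publisher's signature over the payload."""
    return hmac.new(TRUSTED_KEY, payload, hashlib.sha256).hexdigest()

def man_in_the_middle(payload):
    # Attacker on the wire discards the real payload and substitutes code.
    return b"malicious injected command"

original = b"legitimate deployment script"
signature = sign(original)               # published alongside the content
received = man_in_the_middle(original)   # what actually arrives over HTTP

# Over plain HTTP, nothing distinguishes `received` from `original`.
# With the published signature, the substitution is detected:
tampered = not hmac.compare_digest(signature, sign(received))
assert tampered
```

Without the signature check there is simply no step at which the substitution could be noticed, which is the gap Notary’s registration step closes.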
Adding Notary to the publication process enables the Notary server to process and register the component files internally, as part of the process of publishing them — making them “go live” as production software. In a continuous deployment scenario, this type of publishing could take place several times daily.
So when Mónica tried to inject potentially destructive code into an API call, Notary filtered it out as not being part of the service it registered. One of the most dangerous types of exploits (sadly, also a common one) was effectively thwarted.
I asked Docker’s Mónica whether Notary will, in his mind, eliminate from doubters’ minds the notion that containerized architectures are inherently insecure.
“Does Docker make the risk of your company lower? It absolutely does,” responded Mónica, although by way of rephrasing my question first.
“Docker has strong isolation features that, in conjunction with virtual machines, actually make your application and infrastructure a lot safer than ever before,” he continued. “The reality is, if you want defense-in-depth, you use all of the technologies you have at your disposal.”
Mónica contends that existing applications (what containerization advocates call “monolithic”), once deployed inside containers, become safer. Their ability to raise their own privilege levels is minimized, and their capability to call sensitive system commands can be externally limited, he said.
By default, he went on, containers may not access devices unless expressly permitted to do so by the build files that create them. However, Mónica conceded, containers do expect to be contacted by other containers, which leads to the sensitivity that he himself exploited, and showed Notary thwarting. The settings are there, though, to turn down that sensitivity, he said.
Where’s the Little Green Check Mark?
What Docker lacks is a way of communicating the relative security state of its own system by way of a visual cue — for instance, a green checkmark inside a shield logo, or something that folks familiar with automated anti-virus tend to expect. Risk management professionals expect these visual cues, and much of what these people hear about Docker thus far does not come from development conferences.
How does Docker plan to convince these businesspeople that any migration on a scale akin to Docker’s is more of a security gain than a risk?
Mónica responded by arguing that Docker does not increase the “attack surface” for applications (their susceptibility to compromise) that are converted to run in containers.
“Docker is not a replacement for best practices,” he did concede. “We never tried to claim that. You should still use best practices. But the reality is — and I’ve had this conversation with a lot of banks — look, if you have an application, we can guarantee you that putting it inside a container is going to make it safer, because it’s more isolated. We’ve been showing that consistently, and we are able to demo it.”