If you only read the press release — or worse, if you only read the business press produced by people who only read the press release — you’d have gotten the impression that the likes of Microsoft, Google, HP, Cisco, Red Hat, and Goldman Sachs had all rallied together under a flag of truce to declare the existence of a new standard for virtualization that the whole world would agree upon forever. 

Said one broadcast journalist this week, "This is such a miracle because these companies don’t come together on anything."

Please erase everything you’ve read or seen in the business press from your short-term memory. (I have, and it’s refreshing.) I’ve spent the last several days in San Francisco speaking with the key participants in, and the main beneficiaries of, the formation of the Open Container Project. If you’ll indulge me, I can explain it all to you from the top.

Between Two Ecosystems

First of all, let’s eliminate the false assumption that these companies never come together on anything. The list of companies signing onto the Open Container Project is essentially a photocopy of the list of major contributors to the Linux Foundation.

They do come together, on Linux, for better or worse. Although Microsoft’s logo is not in that mix, it has actually been cooperating with the Linux Foundation on several projects since 2011.

Now to the heart of it:  Modern data center workloads, as you know, depend upon virtualization. A virtual machine (VM), as we have come to know it, is an operating system originally designed to run directly on a processor, retooled to run on a hypervisor instead.

The applications in that VM make no presumptions about where they’re running. Hypervisors in a data center cluster are able to pass VMs between themselves, like teammates in a volleyball game.

This has made workloads run more efficiently, and has increased the utilization rate of processors — meaning, you can process more with less hardware. However, from a different perspective, VMs are inefficient. Each one creates an entire operating system environment to make applications think they’re running somewhere they’re not really running.

Containerization resolves this inefficiency through a new kind of virtual compartment, one designed to be managed within an operating system rather than by a typical hypervisor. Because containers share the host’s kernel instead of each booting a whole operating system, workloads grow denser and hardware is used more efficiently yet again.
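
To make that concrete, here is a minimal sketch in Go (the language Docker itself is written in) of what the kernel is actually being asked to do. It is illustrative only: run as root on Linux, it starts a shell inside fresh hostname and process-ID namespaces, and it omits the control groups, filesystem images, and networking that a real runtime such as Docker layers on top.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch an ordinary shell, but ask the Linux kernel to give it
	// its own hostname (UTS), process-ID, and mount namespaces.
	// To the shell, it looks like a fresh machine; to the kernel,
	// it is just another process.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

No guest operating system is booted anywhere in that sketch, which is the whole source of the density win.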

Or, at the very least, it should. Containerization remains largely in the experimental phase, even among data centers that claim they’ve adopted containers. Many of the operating characteristics that specify how a container environment should work remain variables. Such an environment needs to become an “ecosystem,” like the market for Android apps or the distribution of plug-ins and support functions for Salesforce.

Come Together Right Now Over Me

The first problem, just as with every other kind of ecosystem, is getting everyone on the same page. Apple has fewer problems than other companies in creating ecosystems for its services. After Apple dictates terms, there’s an informal complaint process, which is typically ameliorated by the mutual realization that it’s Apple, and hey, what can you do?

Docker Inc. is the company almost solely responsible for the sudden success of containerization. It created this market. Yet a ton of other companies have claimed, in recent months, to have actually created containerization long before we ever heard of it. The list of these companies, once again, reads like a photocopy of the Linux Foundation’s membership.

Claiming one created containers at this point in history would be like someone claiming in 1962, after Tupperware parties had already become all the rage, that he too had performed significant experiments with placing food inside things and deserved to have parties as well.

Let’s be serious:  Yes, everyone and his dog had the idea of using virtualization to compartmentalize workloads. Docker made the idea work in practice, and that’s what actually counts. Everyone else, please sit back down.

The problem is, this ton of other companies will not sit back down. Now that containerization is a “thing,” the major players in the industry need to agree to stop having arguments about it.

OCP: We’ve Got the Future Under Control

The Open Container Project is essentially an agreement among everyone involved in the development of containers to talk collaboratively. Its aim will be to establish a baseline container definition, in the interest of ensuring interoperability.

“This is an open door to anyone who wants to participate in this project,” said Docker Inc. CTO Solomon Hykes, during the Day 1 keynote address at DockerCon.

“The whole point is to get everyone around a table and to make sure we can find the best possible standard — a standard that doesn’t get in the way, but actually helps everyone implement the best possible tools.”

Like a VM, a container may represent a single application. But an application “encased” in that way does not scale easily on a cloud platform, if it scales at all.

Applications designed to run in VMs rely on load balancers to distribute incoming workloads evenly among them as requests arrive through the network. But programs designed for containers may represent the individual functions that, taken together, constitute an application, and each function can scale up or down on its own. You scale only what needs to be scaled.
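
The arithmetic behind that claim is easy to sketch. Suppose one application is decomposed into three containerized functions; the names and load figures below are invented for illustration. Only the overloaded function gains replicas, whereas a VM-centric deployment would have to clone the entire application to relieve one hot spot.

```go
package main

import "fmt"

// service is one containerized function of a larger application.
type service struct {
	name           string
	replicas       int
	loadPerReplica float64 // requests/sec each replica is handling
}

func main() {
	// Invented figures: only "checkout" is running hot.
	services := []service{
		{"catalog", 2, 40},
		{"checkout", 2, 190},
		{"reviews", 2, 25},
	}
	const target = 100 // desired ceiling, in requests/sec per replica

	for i := range services {
		s := &services[i]
		total := s.loadPerReplica * float64(s.replicas)
		// Add replicas one at a time until each carries no more than
		// the target load; untroubled services are left alone.
		for s.loadPerReplica > target {
			s.replicas++
			s.loadPerReplica = total / float64(s.replicas)
		}
		fmt.Printf("%-9s -> %d replicas\n", s.name, s.replicas)
	}
}
```

Only checkout grows, from two replicas to four, while the other functions and the hardware beneath them stay put.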

Accomplishing this does not happen automatically. Put another way, unlike Tupperware, containers cannot exist in a vacuum. They need an orchestrator, which is exactly what it sounds like: the one in charge.

It seems fair enough that Docker Inc. should have the opportunity to build a Docker orchestration environment around Docker containers, and last February it did so. But for containers to become ubiquitous, they need to be supported by more than one company.

Google backs the Kubernetes project as a scheduling orchestrator for containers. Meanwhile, a commercial entity called Mesosphere uses the open source Mesos technology to build what it calls a “data center operating system”: an OS that scales beyond single processors, or even clusters of processors.

Containers also need support systems, since they have to co-exist with one another on networks and communicate with databases. It seems fair that Docker Inc. gets to build a Docker plug-in standard around Docker containers. But for anyone to take that standard seriously, it needs to evolve beyond the specifications of just one company.

So here is Docker Inc.’s dilemma:  To get the huge list of Linux Foundation members to back containers, Docker Inc. must step back a few paces, and surrender some of its presumed authority over the emerging standard.

The trick is deciding how much to step back without ceding the ability to compete effectively against a company like IBM or VMware or Microsoft, any or all of which could produce their own container formats and start a sales war. (Microsoft is already producing Windows Server containers.)

Docker Inc. has been portrayed as a very successful startup, but that success is not assured long-term. It needs to monetize the ecosystem it has created. Third parties are capitalizing on the emerging Docker support market, and Docker Inc. actually has some catching up to do in this regard.

If the major players were to have some reason to abandon Docker — for example, if someone else, backed by Google, claimed to have built better containers — then Docker Inc. would not have a healthy market from which to draw any profits whatsoever.

Building such a market requires coordination, which means taking this “open source” market and somehow prying it open. Docker Inc. is not in an Apple-like position where it can dictate terms. On the other hand, it does hold the keys to the kingdom.

It needs to exert the influence it has today to compel supporting players to cooperate, if it is to have any chance of holding any influence tomorrow.

Grounding Containers in Reality

So that’s why the OCP matters to Docker Inc. Of course, that’s not what my headline promised. Why does it matter to you?

As of today, there is no clear roadmap to guide the migration from the VM-centric data center to the containerized data center. That’s critical. You’ve been told there’s a benefit to “containerizing your apps,” but then you’ve been told that this isn’t the real point: the real point is to build better apps. How do we do that?

In all honesty, we don’t really know. It’s easy to reinvent the wheel; what’s difficult is the process of moving workloads off of stones and onto wheels. Nobody has a plan for that.

Essentially all of the content and discussion at DockerCon centered on improving containers and constructing an ecosystem around them. Getting from here to there is not yet a major concern. Such a movement requires direction, and no one can yet say for certain which way to go.

The Open Container Project is the launching point for a discussion about direction. Here is where the organizations that say they listen to their customers need someone to listen to.

They need to hear more about “virtual stall.” They need to see why the “virtualization tax” continues to bleed IT departments dry. They need to hear why too many abstraction layers result in less security, not more.

Maybe you haven’t talked with your Docker Inc. representative lately, but your organization might have some ties with IBM, Cisco, Red Hat, Microsoft, EMC, Intel, Google, Pivotal, or VMware. This is where you enter the picture.

Let’s be honest:  These companies really did not come together into any kind of forum to decide the roles, formats and purposes of virtual machines. And we’ve spent the last decade-and-a-half in a state of “disruption,” trying to piece everything together. Containerization is an effort to overcome many of the problems virtualization created.

Maybe you don’t have a roadmap either, but after a decade and a half of this disruption, there’s a good chance you’ve learned the way not to go.

Title image by BMiz, licensed under a Creative Commons Attribution-Share Alike 2.0 Generic license.