At some point soon, you and your colleagues will likely wonder whether you need to make a fundamental change to your IT infrastructure ... at least if you want your organization to be effective and remain competitive.
You might think you’ve already had that discussion, but you’ll have it again soon. And thanks to news that broke yesterday, “soon” just became “sooner.”
There was an acquisition in the technology space Wednesday morning which, if you were standing in the room where it happened, you might not have thought was all that important: Docker, an open platform for distributed applications, acquired SocketPlane, a startup made up of developers who had recently worked at firms such as Red Hat.
Averting a Bottleneck
Outside of developers’ circles, it may not be obvious why this is so important. So let me explain. Docker is the most important implementation thus far of a technology that lets organizations deploy new types of applications as small containers — lightweight, isolated packages that, unlike full virtual machines, share their host’s operating system kernel. They can run in the cloud or on-premises under any major Linux distribution, and perhaps within the next few months, under the newest Windows Server as well.
These containers are easy to provision and easy to write programs for. Businesses can write those programs for themselves, rather than rely upon vendors to do it for them.
Developers may use the languages of their choice. And instead of updating a colossal, monolithic application over months or years, they can adjust individual containers in mere minutes.
Conventional applications are made up of components that communicate with each other internally. By comparison, applications built for Docker are made up of containers that use APIs to communicate, share data and delegate workloads.
Those APIs are made up of function calls carried over Internet Protocol. This means containers don’t communicate internally, the way the components of a conventional application do, but openly, over a network.
That isn’t much of a problem today. But Docker is extremely new, and because applications written for Docker are built to scale, all that network traffic could become a bottleneck later.
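To make the contrast concrete, here is a minimal sketch of the idea — a toy “inventory” service (the name and its JSON reply are hypothetical, not any real product’s API) that another component reaches over IP rather than through an in-process function call:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical "inventory" microservice. In a containerized design,
# even two pieces of the same application talk to each other this way:
# over Internet Protocol, not through shared memory.
class InventoryAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"widgets": 42}')

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Port 0 asks the OS for any free port, so the sketch runs anywhere
server = HTTPServer(("127.0.0.1", 0), InventoryAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second "container" consumes the API across the network boundary
reply = urlopen(f"http://127.0.0.1:{server.server_port}/stock").read()
print(reply.decode())
server.shutdown()
```

Every one of those calls crosses a network stack — which is exactly the traffic that multiplies as containers do.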
New World Order
In the old world, an application was installed on a server. If you needed to increase the workload, you added a new server, installed the application there and governed them with a load balancer.
Bigger workloads, more servers. The advent of virtualization, with systems from VMware and others, replaced physical servers with virtual ones, often running inside a single physical server’s memory space. But that didn’t solve the basic workload scalability problem.
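The old model can be caricatured in a few lines: identical application servers behind a round-robin load balancer. The server names here are hypothetical, purely for illustration:

```python
from itertools import cycle

# Old-world scaling: clone whole servers, rotate requests among them.
servers = ["app-server-1", "app-server-2", "app-server-3"]
next_server = cycle(servers).__next__

def route(request):
    # Each incoming request goes to the next server in rotation
    return (request, next_server())

routed = [route(f"req-{i}")[1] for i in range(4)]
print(routed)
```

Note that every server in the rotation is a complete copy of the application, whether or not every part of it is under load.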
With microservices architecture, the way you increase workload is not by duplicating entire servers but instead by provisioning just the services you need — in Docker’s case, more containers.
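A toy sketch of that difference, with hypothetical service names: instead of cloning the whole stack, you add replicas of only the one service that is under load.

```python
# Service-level scaling, caricatured: each service tracks its own
# replica (container) count, and only the hot one gets more.
class Service:
    def __init__(self, name):
        self.name = name
        self.replicas = 1

    def scale(self, n):
        self.replicas = n

app = {name: Service(name) for name in ("web", "orders", "reports")}
app["orders"].scale(5)  # only the overloaded service grows

replica_counts = {name: svc.replicas for name, svc in app.items()}
print(replica_counts)
```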
Imagine General Motors facing rapidly rising demand for automobiles. Rather than build a duplicate factory for every additional Chevrolet or Buick, it would simply hire more workers for the plants it already has. Docker is sensible in just that way.
Except there’s one problem. Those new workers, if you will, communicate over the network. So if an application must scale up dramatically, the replication of conceivably thousands of new containers could congest that network.
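A back-of-the-envelope calculation shows why this matters. If, in the worst case, every one of n containers must be able to reach every other over the network, the number of connections grows roughly quadratically:

```python
# Worst case: a full mesh, where every container talks to every other.
# Connection count grows as n * (n - 1) -- roughly n squared.
def mesh_connections(n):
    return n * (n - 1)

for n in (10, 100, 1000):
    print(n, mesh_connections(n))
```

Ten containers mean 90 possible connections; a thousand mean nearly a million. Real applications are not full meshes, but the trend is the point: container counts scale faster than networks comfortably do.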
Why SocketPlane Matters
Enter SocketPlane. It was founded just last October, with the mission of virtualizing the network in such a way as to overcome this congestion problem, specifically for Docker.
As SocketPlane vice president John Willis explained to The New Stack’s Alex Williams last December, the first generation of virtualized networks addressed this problem in a way that was coupled to specific hardware to a great extent.
The second generation enabled cloud computing, thus eliminating the hardware dependencies. The data plane, on which the content of a network travels, could be manipulated by software. But by “software,” we typically meant the operating system or a driver managed by an OS.
SocketPlane is building a kind of segmented, software-defined network (SDN) that uses Internet Protocol, but that exists entirely inside the Docker context. Containers still communicate with one another, but not over the Internet and not through an OS. This means they bypass the network controller — even the virtual, software-based kind produced by Brocade.
At least theoretically, software built to run in Docker containers can scale up without the bottlenecks. Docker’s acquisition of SocketPlane effectively funds that group’s segmented SDN efforts, ensuring that it can continue to work with the open source community to achieve its goals.
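The idea of a container-scoped virtual network can be caricatured as follows. To be clear, this is an illustration of the general concept, not SocketPlane’s actual design; the addresses and functions are invented:

```python
# Illustrative only -- NOT SocketPlane's real implementation.
# A container-scoped overlay: addresses are assigned and traffic is
# "switched" entirely inside the Docker host, never leaving for the
# outside network or passing through an external controller.
virtual_net = {}  # container name -> private overlay IP (hypothetical)

def attach(name):
    ip = f"10.1.0.{len(virtual_net) + 1}"
    virtual_net[name] = ip
    return ip

def deliver(src, dst, payload):
    # Delivery is resolved within the overlay's own table;
    # no hop onto the corporate network is involved.
    assert dst in virtual_net, "destination not on the overlay"
    return (virtual_net[src], virtual_net[dst], payload)

attach("web")
attach("db")
result = deliver("web", "db", "SELECT 1")
print(result)
```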
So that’s one bullet dodged. Other than that, what does this mean for you?
Change is Near
When you consider cloud architecture for your data center — whether you’re leasing virtual machines from Amazon, leasing servers from Rackspace or using your own servers on-premises — you’re considering a change from immutable applications rooted to particular servers in defined locations, to flexible workloads that can move between processors separated by the space of a planet.
A single developer within your organization could be empowered to make whatever changes your software requires, whenever they’re required — something the architectures of servers and operating systems prevented before.
When Docker incorporates SocketPlane technology sometime in the near future, one of your developers could conceivably model your entire business network inside the space of a single computer — for instance, her laptop.
She can make changes and test them there. Then she can deploy those changes within the corporate network gradually instead of all at once, with the ability to withdraw any change that does not behave as it should. The scaling of the network would take place entirely within the Docker context, as the application requires, without burdening the corporate network or the cloud network.
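The gradual-deployment workflow described above resembles what is often called a canary rollout. A minimal sketch, with invented version labels and a hypothetical health check:

```python
# Hypothetical canary rollout: shift traffic to the new version in
# stages, and withdraw the change if a health check ever fails.
def canary_rollout(old, new, healthy, steps=(0.1, 0.5, 1.0)):
    """Promote `new` stage by stage; roll back to `old` on failure."""
    for fraction in steps:
        if not healthy(new, fraction):
            return old  # withdraw the misbehaving change
    return new  # all stages passed; the change sticks

# A change whose health check always passes is fully promoted
promoted = canary_rollout("v1", "v2", healthy=lambda v, f: True)
# A change that fails at 50% of traffic is rolled back
rolled_back = canary_rollout("v1", "v2", healthy=lambda v, f: f < 0.5)
print(promoted, rolled_back)
```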
Calculate the expenses you’ve already incurred simply contemplating a migration between CMS, BPM, ERP or CRM applications. Now imagine not merely reducing those costs, but eliminating them entirely. That’s the prospect made feasible by yesterday’s SocketPlane news.