To quite possibly no one’s surprise Wednesday, Microsoft made it official that its “dramatically refactored” Windows Server operating system will be named Nano Server. It further acknowledged that a preview edition will be made available to testers within a few weeks.
Nano Server will be Microsoft’s minimalistic approach to serving applications, stripped down to the bare essence.
In a very real sense, it's not Windows at all: it will have no windows, mouse pointers, menu bars or anything else associated with a graphical environment.
It's a back-end system and nothing else — a concession, at long last, to the fact that the only thing a server needs to do is manage its share of the workload.
However, as language from a post to the company’s TechNet blog Wednesday afternoon clearly indicates, Nano Server will not be a stand-alone product.
Like Server Core — Microsoft’s first attempt to address the needs of modern servers with a slimmed-down OS — it will evidently be delivered exclusively to licensees of Windows Server.
That post, co-authored by Microsoft Lead Architect Jeffrey Snover, describes Nano Server as “available in the next version of Windows Server.” It does not set a timeline for a general release.
With Microsoft’s Build developers’ conference in San Francisco set for April 29 and its Ignite conference in Chicago set for the following week, data center admins should be testing Nano Server in pre-production environments by this time next month.
The reason for Nano Server’s existence is complex, but critical. We commonly think of the operating system as the platform upon which applications run. In the data center, this is no longer true.
Today, a server operating system is merely the manager of processes running on a local processor. The OS has effectively devolved from something like an ant farm into something more like a single ant.
Back to Basics
Nano Server will be designed to run containerized applications — essentially, digitally shrink-wrapped packages containing all the dependencies and resources an application requires to run. Unlike virtual machines (VMs), containers share the host’s kernel rather than carrying a full operating system of their own.
This is all that Nano Server will do. This way, containers may be deployed on any system that supports the container’s platform, without the applications inside them having to be “installed.”
Installation is one of the least loved jobs in the IT business, ranked down there along with “gorilla cage cleaner” and “reviewer of Kevin Costner films.”
In one respect, Nano Server will be leaner than the first operating systems ever made. It will lack the capability to produce any output at all — even a command-line interface (CLI) — on a local display. Instead, it’s designed to network with a remote administrator console by way of PowerShell, Microsoft’s supremely versatile command-line tool.
In an environment where thousands of tiny OSes may run simultaneously, having a local display or a CLI for each one would be ridiculous. From a remote console, a PowerShell command line could conceivably address entire swaths of these OSes (Regiments? Divisions? Squadrons?) with a single command. An admin could scale the number of running OSes within a cluster up or down as needed.
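To illustrate (the file name and script body here are hypothetical examples, not anything Microsoft has published for Nano Server), addressing a whole fleet of headless servers from one admin console with PowerShell remoting might look something like this:

```powershell
# Hypothetical sketch: query every headless server listed in servers.txt
# from a single console, using PowerShell remoting (WinRM).
$servers = Get-Content .\servers.txt   # one hostname per line

Invoke-Command -ComputerName $servers -ScriptBlock {
    # This block runs on each remote machine in parallel;
    # here it reports that machine's five busiest processes.
    Get-Process | Sort-Object CPU -Descending | Select-Object -First 5
}
```

One `Invoke-Command` fans the script block out to every name in the list and streams the results back — the "single command addressing entire swaths" the article describes.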
With containerized applications and microservices ruling the proverbial roost, the operating system is no longer the master platform but a subordinate tool. It wouldn’t make sense for full-blown Windows to serve as that tool.
Hyper-V Plus Docker Containers
Since Microsoft has partnered with Docker, the industry’s leading container platform, Nano Server will run Docker containers.
It will also run existing Windows Server containers, as well as a new container format from Microsoft called Hyper-V Containers, the existence of which was revealed Wednesday.
“Hyper-V Containers will ensure code running in one container remains isolated and cannot impact the host operating system or other containers running on the same host,” reads a Wednesday blog post from Windows Server general manager Mike Neil.
(The Hyper-V brand already denotes Microsoft’s virtualization platform, though as we may discover soon, Hyper-V VMs and Hyper-V Containers may share little more than the brand.)
The virtue of isolation lies in how Hyper-V Containers support microservices architecture — a style in which an application is decomposed into thousands of small, identical functions running in parallel, rather than one function running in a handful of sequences.
This makes the deployment and management of applications on cloud platforms (such as Microsoft’s own Azure) radically simpler.
If microservices had to wait on one another to finish their tasks before they could continue, latency would grow exponentially as the number of services grew linearly. With isolation, there are no such dependencies to slow down execution.
“Doing Hyper-V Containers is a natural step from a computer science perspective and especially for a virtualization technology vendor like Microsoft,” said IDC program director for software development research Al Hilwa, in a comment to CMSWire Thursday.
“Finding a middle-point between a Docker-style container and a full VM is a natural evolution, but making [Hyper-V Containers] Docker-compatible is really helpful for developers and IT organizations looking for more choices in deployment with minimal added engineering.”
The objective is to put Microsoft back in a position where its products are providing the virtualization platform. The old-style, monolithic operating system can no longer do this, but agile, high-performance containers quite possibly can.
Evolving the Back End
When Microsoft first conceived Windows Server before the turn of the century, it was with the idea of achieving end-to-end Windows — a Microsoft operating system on every device you could possibly conceive.
(Except smartphones, tablets, MP3 players, embedded control systems, wearable devices, portable sensors, security appliances, point-of-sale terminals and electronic billboards. Those would come later.)
Because it was Windows, Windows Server was designed to run the same graphical UI as every consumer edition of Windows, while also hosting applications and managing client/server connections. For a little while, this made connectivity almost impossibly slow.
When that “little while” expired, connectivity became impossibly slow outright, and radical changes were necessary. Virtualization emerged to let workloads run in protected memory spaces and across multiple processors.
But remote administration of a graphical operating system over minimal bandwidth was akin to herding cats by loudspeaker from a remote lunar outpost. So Server Core was created: a version of Windows Server that retained many of the resources supporting a graphical UI, without the graphical UI itself.
Windows Server soon found itself competing against VMware to provide the virtualization platform for serving, ironically enough, Windows Server applications.
As virtualization gave way to cloud architectures, both platforms sought to run CMS, CRM, ERP and major database-driven applications in the cloud — apps that, to this day, rely on Windows Server.
For Microsoft to beat VMware, Windows Server needed to support a feature called “live migration” — the seamless transition of workloads between processors.
Because Windows was designed to run on single processors, live migration failed to work in Windows for years. In 2008, the feature was postponed.
And when one publication called the company out for continuing to postpone live migration, Microsoft issued its official opinion that live migration would only ever be useful “on a very limited set of servers” anyway.
It is for this reason that VMware seized the lead in virtual environments and never looked back. Windows Server’s failure to stay competitive made Linux ubiquitous in the cloud. It’s why in 2014, CEO Satya Nadella finally had to concede, “Microsoft [heart] Linux.”
But in 2015, it is VMware’s turn to be perceived as behind the times. The server world is moving on to microservice architectures and containerized applications.
Rather than attempt to reinvent the wheel and fail, Microsoft this time partnered with Docker to enable containerization support in Windows Server and in the product we will come to know as Nano Server.
“Nano Server is interesting,” said Hilwa, “because it is especially designed for these sub-VM container environments. We are seeing options to do this in the Linux world with Red Hat Atomic and CoreOS, so it makes sense to see a Windows Server alternative for the large swath of companies in the Microsoft OS fold.
“Again, I see this as increasing deployment options for these customers and helping them move faster to the cloud,” he continued. “I think Microsoft is aware that the world will continue to run Linux and Windows Server.”
But if microservices can migrate from cloud to cloud, perhaps Microsoft doesn’t have to occupy a platform in its entirety to lay claim to it. Perhaps its platform could instead travel through the cloud alongside Linux, CoreOS, Atomic and whatever else.
Maybe it wouldn’t have to seize control to win. Maybe it only has to show up to the party.