The promise of microservices for the enterprise today is this: If the parts of an application could be staged separately, on whatever cloud or on-premises platform you have on hand, you could continually optimize that application for cost.

Theoretically, a content management system (CMS) — or whatever platform stages your customer-facing content — could scale up slightly when traffic increases slightly. It could scale down over a holiday or a weekend, or during whatever seasonal or recurring period typically brings lower traffic.

And you could automate the scaling process, rather than convene an emergency confab and make a critical scheduling decision every time your content becomes successful.

That’s the theory ... today. But something very big has to change to make this feasible: the underlying architecture of your data center, whether your organization owns it, leases it, or stages it on a public cloud.

A Taste of Microservices

Of course, vendors in the infrastructure business would be more than happy to oblige. And major names in the technology field are extremely happy to acquire such businesses and carve out a sizable share of revenue from these transitions.

But let’s not kid ourselves by ignoring the fact that the journey from here to there would be monumental.

So vendors are looking for ways to give enterprises a taste of microservices — or, in certain cases, something that tastes close enough to microservices for now — in an effort to build a market before a credible platform exists for actually abandoning monolithic applications and migrating to something similar to what Netflix uses.

Forrester analyst Randy Heffner told CMSWire a story about a client who told him, “When I saw something on the Etsy architecture and what’s behind there, or something on Amazon, it was like, ‘Oh my gosh, that’s way beyond us! We can’t do that!’

“But here’s something that’s like, ‘Ah, I get it! And I see how I can be investing in that, in a way that’s in line with business benefits — not just an investment in architectural glory.’”

Do-It-Yourself Microservices

Example No. 1: “When you take Netflix’s fifty things in its technology,” said Richard Li, CEO and founder of Boston-based startup Datawire, “and try to figure out what are the five things you need to start with, it’s not entirely obvious.”

Datawire is a Platform-as-a-Service (PaaS) that supports a gradual development model, in which the features of an application can be built in incremental, even arbitrary, stages.

Since an enterprise may be developing multiple customer-facing web apps that mostly do the same thing, with slightly different skins and content, Datawire suggests the common elements be created first and made to interact with whatever specific implementations come along later.
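As a rough illustration of that pattern (every name below is hypothetical, not part of Datawire’s tooling), the shared content service is built once, and each skinned front end supplies only its own presentation details later:

```python
# Hypothetical sketch of "build the common element first": one shared
# content service, many thin skins. These classes illustrate the pattern
# described above; they are not Datawire's API.

class ContentService:
    """The common element: fetches content the same way for every app."""

    def __init__(self, store):
        self.store = store              # any backing store with a get() method

    def article(self, article_id):
        return self.store.get(article_id)


class SkinnedFrontEnd:
    """A later, app-specific implementation: same service, different skin."""

    def __init__(self, service, template):
        self.service = service
        self.template = template        # per-brand presentation, e.g. "<h1>{title}</h1>"

    def render(self, article_id):
        doc = self.service.article(article_id)
        return self.template.format(**doc)


# Two customer-facing apps reuse the one service that was built first.
service = ContentService({"42": {"title": "Holiday Sale"}})
retail = SkinnedFrontEnd(service, "<h1>{title}</h1>")
outlet = SkinnedFrontEnd(service, "<h2>Outlet: {title}</h2>")
print(retail.render("42"))
print(outlet.render("42"))
```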

“We have a free and open source product that lets you start building microservices very quickly,” Li said, “in a way that’s fully compatible with your existing technology. So we provide a very smooth transition path into adopting microservices.”

Convergence Through Divergence

Example No. 2: Last summer, big data platform provider MapR unveiled what it called a Converged Data Platform, which it describes as a microservices platform for supporting multiple types of data sources — databases, data stores, data streams. The microservices element comes into play on the server side, as a system that exposes its functionality to developers using a vendor-neutral API.

This way, a mobile app or a CMS front end can address data in a vendor-agnostic fashion, and the platform can scale as needed to handle such requests.

“All of the complexity of where the data is, is handled by our Converged Data Platform,” MapR Senior Vice President Jack Norris told CMSWire. “And the microservices themselves are connected through this event-driven framework.”

In other words, the back-end framework is not “wedded” to any one data platform’s mechanism. Rather, it sits and waits for a signal and responds to it — whether that signal comes from SQL Server or Cassandra or a component of Hadoop.

From a developer’s perspective, the new MapR platform inverts the proverbial pyramid, so the data store is no longer “big data” but instead a single, small “mount point” — a point of contact for the exchange of data with whatever’s behind it. The client application or Web app doesn’t need to know, or care, how “big” that data may be.
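A loose sketch of that arrangement, with an assumed path and event shape rather than MapR’s actual API, might look like the following: the handler reacts to a change signal and reads through a single mount point, never learning which system produced the data or how large it is.

```python
# Hypothetical sketch of the "mount point" idea: the client reads data
# through one path-like point of contact and never learns whether
# SQL Server, Cassandra, or a Hadoop component sits behind it.
# The path and event fields are illustrative, not MapR's actual API.

MOUNT_POINT = "/mapr/cluster/tables/orders"    # assumed path, for illustration only


def read_record(mount_point, key):
    # Stand-in for whatever file- or table-style read the platform exposes;
    # the caller knows only the mount point, not the data's size or source.
    return {"key": key, "status": "shipped"}


def handle_event(event):
    """React to a change signal, whichever upstream system emitted it."""
    record_key = event["key"]                  # e.g. an order ID
    record = read_record(MOUNT_POINT, record_key)
    print(f"order {record_key} updated:", record)


# An event arrives from *some* source; the handler neither knows nor cares which.
handle_event({"key": "A-1001", "source": "cassandra"})
```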

It’s microservices, in at least one sense. MapR’s platform does pave the way for an eventual, fully scalable back-end system, and it points itself in the general direction of abandoning the old data monolith.


Example No. 3 involves an emerging concept from the software development world that enterprise vendors have been kicking around lately: so-called serverless architecture.

It isn’t really serverless at all. Rather, it’s a way of programming a function designed to be distributed through the Web, in such a way that the programmer never has to address where that function lives or what is serving it up. One example is AWS Lambda, an Amazon platform that effectively stages raw code.
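To see how little of the server the programmer actually touches, consider a minimal function of the sort Lambda stages. The request field used below is made up for illustration, though the (event, context) entry point is Lambda’s standard Python convention.

```python
# Minimal Python function of the kind AWS Lambda stages: the platform alone
# decides where, and on what server, this code runs. The "name" field is an
# illustrative assumption; the (event, context) signature is Lambda's
# standard Python entry point.

def lambda_handler(event, context):
    name = event.get("name", "world")     # pull a value from the incoming request
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!"
    }
```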

Of course, if you’re not the programmer yourself, what’s the big benefit to you? Service providers quickly concocted a response: Serverless architecture is a way to serve up microservices architecture in bite-size nuggets.

If you remember the old “Let’s Make a Deal,” where Monty Hall would open up a colorful curtain to reveal a shiny, closed box, you get the idea.

But by juxtaposing these two watchwords, are vendors and service providers actually excluding the portion of microservices that gives it its full value proposition — namely, the cultural and business benefits?

“Microservices is about having a separation of concerns,” said CoreOS CTO Brandon Philips, “and enabling individual teams to focus on specific problems.” CoreOS produces rkt (pronounced “Rocket”), a container engine that competes with Docker.

In a standard serverless architecture such as Lambda, explained Philips, the code is dispensed to the server (yes, there’s a server in serverless systems) and its deployment methodology is forgotten. In a typical microservices environment, by contrast, all the servers are carefully and systematically orchestrated — in CoreOS’ case, by a component called Tectonic. It’s this orchestration, and the need for coordination, that helps bring development teams together with operations teams.


Meanwhile, serverless does tend to concentrate on the core job of moving business logic into production, Philips added, making it more “opinionated” than pure microservices in his view — in a good way. For example, a serverless platform tends to automatically scale the deployment and distribution of a service based on the “ephemeral requests” it receives (such as a page load request). It has an “opinion” about how to do that, rather than relying on instructions or automation worked out in advance.

“With microservices,” said Philips, “you’re still in control of how that application ends up getting built and deployed.”

So there’s an overarching philosophy about microservices as a whole that could very well be sacrificed when the concept is divvied up into parts, or even traded for a different philosophy entirely. The microservices philosophy does deserve discussion: a complete attitude adjustment that could re-orient your development team away from branded monoliths such as your current CMS, and toward the individual, customer-facing functions your IT assets provide.

The cost of adopting that philosophy is pretty simple: Hand over your infrastructure as you know it today. And it’s that cost that makes the bite-size option look better and better.

CMSWire’s Dom Nicastro contributed to this series on microservices.


Title image of a toy Volkswagen Microbus by Sigurdas, licensed under Creative Commons