Last week Red Hat announced a collaboration agreement with Google that will soon make Red Hat’s OpenShift Dedicated platform available on the Google Cloud Platform in addition to Amazon Web Services.

Sure, there was plenty of fanfare to punctuate the announcement, but that wasn’t really the important part.

If Google is to make its Cloud Platform more competitive against AWS and Microsoft Azure, in what has clearly now boiled down to a three-way race for the public cloud space, it needs to open up GCP as a conduit for integration — for making the software products we used to call “platforms” communicate with one another.

Middleware is one way to go about this. Granted, some software vendors struggling to distinguish themselves in new environments take bold, anti-middleware stances just to stay in the headlines.

However, there is a valid argument to be made that modern data center architectures, such as microservices, render the original concept of middleware obsolete.

Yet the counter-argument is just as impressive: Old software isn’t going away.

Back on the Bus

Even if they get repackaged and redeployed in new ways, CRMs, ERPs and CMSs will need to be represented someplace in the orchestration scheme, in order to coordinate their services and make their data interoperable.

Since the turn of the century, that’s been the role of the enterprise service bus (ESB) — a means for integrating multiple software platforms in real-time.

The public cloud won’t be a sustainable option for many large organizations, especially those on a certain publisher’s 500 list, if the software those organizations deploy in the cloud remains compartmentalized in the same virtual silos it inhabited when it was all hosted on-premises.

The ESB holds the promise that, wherever old and new software is hosted, developers can make that software interoperate and share data, even when it wasn’t originally designed to do so, in whatever century it was published.

Yes, those developers typically have to be in-house, but Red Hat’s OpenShift platform makes itself more accessible to those developers than the old enterprise portals, such as TIBCO’s ActiveMatrix BusinessWorks or Oracle’s BEA AquaLogic.

Those old portals are still maintained as active products, because the old software they route together is still operational, including within large enterprises. If these enterprises are ever to move to the public cloud, something has to assume the role of middleware, whether we continue to call it that or not.

That’s why this latest deal between Red Hat and Google (not to mention others that may yet arise in a similar vein) is so important. It brings to the table a combination that could not have been considered before: Red Hat’s JBoss Enterprise Application Platform, including Red Hat’s Fuse ESB, operating in conjunction with Google Cloud Datastore, running software integrations for customers using modern containers and Kubernetes, in a managed service platform — where Red Hat manages the infrastructure.

“All of the middleware services — JBoss EAP and Fuse — are available on OpenShift, and they will be available on OpenShift Dedicated,” said Sathish Balakrishnan, Red Hat’s Director of OpenShift Online, in an interview with CMSWire.

Docking Maneuvers

Like Apollo and Soyuz, the docking maneuver between OpenShift and GCP has to unlock two doors in order for customers to make it work. 

As Balakrishnan told us, existing Google Cloud IaaS customers who are currently using Kubernetes for orchestration will be given the opportunity to leverage OpenShift Dedicated as a managed service offering — in this case, managed by Red Hat and not Google.

On the opposite side of the docking maneuver, existing OpenShift Dedicated customers will see GCP as a deployment option alongside Amazon.

The common element these two services share is Kubernetes. Google is the creator and project steward for Kubernetes. Red Hat recently retooled OpenShift’s entire architecture, replacing its own homegrown “cartridges” orchestrator with Kubernetes.

It’s this common element which Fuse leverages to enable Red Hat’s new, revamped vision of the enterprise service bus. As OpenShift Project Manager Diógenes Rettori told CMSWire, OpenShift Dedicated converts the instructions for the integration process from what used to be a Java package (a JAR file) into a Docker container.

“An integration route is a formalized means of connecting two different systems,” explained Rettori. “It’s a way of defining all the necessary steps that it takes to connect System A to System B.”
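To put that in concrete terms, here is a minimal sketch of such a route in the Java DSL of Apache Camel, the open source integration framework at the heart of JBoss Fuse. The file-based endpoint URIs are placeholders chosen for illustration, not endpoints Rettori named.

```java
import org.apache.camel.builder.RouteBuilder;

// Minimal sketch of an integration route in Camel's Java DSL, the routing
// engine underneath JBoss Fuse. The endpoint URIs are hypothetical.
public class SystemAToSystemBRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:data/systemA?noop=true")                          // pick up records from "System A"
            .log("Relaying ${file:name} from System A to System B")  // trace each record as it passes
            .to("file:data/systemB");                                 // hand it to "System B"
    }
}
```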

Rettori offered some common examples, where “System A” and “System B” are endpoints:

Suppose one application, in the interest of making its data interoperable, renders records as XML files. In Fuse, a route may encode the steps necessary to translate that XML into a JSON file, for the sake of another application that expects records to be shared using JSON.

Customer data from Salesforce can be translated into a format expected by an SAP application. In a more complex scenario, a Twitter stream can be parsed for specific types or examples of data being sought by a CX management platform.
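The first of those examples, XML in and JSON out, maps fairly directly onto a Fuse/Camel route. The sketch below is an illustration under stated assumptions rather than code from Red Hat: it presumes JAXB-annotated record classes in a hypothetical com.example.records package, plus the camel-jaxb and camel-jackson components on the classpath.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

// Hypothetical route: read XML records from one application, emit JSON for another.
public class XmlToJsonRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:exports/xml?noop=true")            // records rendered as XML files
            .unmarshal().jaxb("com.example.records")  // XML -> Java objects (hypothetical JAXB package)
            .marshal().json(JsonLibrary.Jackson)      // Java objects -> JSON
            .to("file:imports/json");                 // for the application that expects JSON
    }
}
```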

“The way we have enabled Fuse to work on OpenShift is by specializing the way your routes are deployed,” said Rettori. “Each route that you create becomes an independent, deployable container — some say it becomes a microservice endpoint itself.”

This is what makes the concept of Fuse, born in the years when Dockers were khaki pants, work in the free-flowing universe of containers and Kubernetes.
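To see why that deployment model fits, note that a single Camel route can run as its own self-contained Java process, exactly the kind of unit that can be built into an independent container image and scaled on its own. A minimal sketch, assuming Camel's standalone Main class and a placeholder timer route:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

// A single route packaged as its own standalone process: the sort of unit
// that can be built into an independent container image and scaled separately.
public class StandaloneRouteApp {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                // Placeholder route: fire every five seconds and log a heartbeat.
                from("timer:heartbeat?period=5000")
                    .log("route container is alive");
            }
        });
        main.run();  // blocks, running the route until the process is stopped
    }
}
```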

This doesn’t make Fuse scripts as language-agnostic, or “polyglot” (to borrow a software development term), as OpenShift itself. Fuse script code continues to be written in a domain-specific language; examples include the Java DSL, Fuse’s own brand of XML, and an alternative form called Spring XML, based on the Spring Java framework.

[Image: Red Hat OpenShift / JBoss Fuse integration]

But as Balakrishnan pointed out, a developer or admin doesn’t actually have to write the DSL code directly, opting instead to use a visual tool such as OpenShift Origin [shown above]. Here, someone who handles the integration role in the organization can plot a flowchart depicting the steps taken during the data translation process, and the conditions certain steps may need to meet before proceeding.
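Whether the flowchart is drawn in a visual tool or written by hand, the route and its conditions end up expressed in the same DSL. As a rough illustration (not taken from the tooling shown above), Camel's Java DSL models that kind of conditional step with a choice/when construct; the XPath test and endpoints here are hypothetical.

```java
import org.apache.camel.builder.RouteBuilder;

// Hypothetical branching route: records only move on to "System B" once they
// satisfy a condition; everything else is parked for review.
public class ConditionalRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("file:staging?noop=true")
            .choice()
                .when().xpath("/record[@status = 'approved']")  // the condition a step must meet
                    .to("file:systemB/inbox")
                .otherwise()
                    .to("file:review/held");
    }
}
```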

The end result is this: Through OpenShift Dedicated, an organization can create a real-time integration path between applications that avoids the bottlenecks of stream processing through exclusive databases or data warehouses.

Instead of the traditional extract/transform/load (ETL) method of preparing data for integration, a simple parser can be provisioned on an as-needed basis. And if it’s needed a whole lot, then the orchestrator can scale that parser up and back down, just like any other container.

But that scaling up and down becomes a lot simpler through Google Cloud Platform, because it’s already geared for Kubernetes, which OpenShift now utilizes exclusively.

The Bus is Better

There remains some debate over whether the entire ESB concept is just an excuse to “duct-tape” old software to new platforms. Add to that the argument that moving JBoss Fuse ESB to a containerization model is just a new way to shrink-wrap the same old duct tape.

Shippable is a company in the CI/CD business; it produces a system for managing the rapid deployment and integration of containerized microservices at very large scales. It has every reason to shun ESB as a vestigial remnant of older business models.

But in an interview, Shippable CEO Avi Cavale told me, “I’m a big believer in Enterprise Service Bus. The reason why is because your applications need to have bounded contexts.”

Yes, it’s time for another technical term explained. Cavale offered an example of a typical enterprise that deploys around 100 line-of-business applications.

“If my one application has to know every single, other application to interact with it, then I don’t have bounded contexts. I have unbounded contexts, which means, every single change that occurs in any of these applications will affect me, and I need to constantly worry about them.”

Among Google’s existing services is a component called Google Cloud Pub/Sub (short for “publish/subscribe”), an asynchronous messaging service that carries messages between endpoints represented in a network by HTTPS URLs, such as Diógenes Rettori’s “System A” and “System B.”

An asynchronous messaging scheme allows the sending app “A” to lay messages into a kind of inbox, for “B” to attend to in its own time. That scheme is perfectly fine for data centers geared around only “A” and “B,” where neither application needs to scale up or down to any large degree.
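For comparison, this is roughly what that “inbox” model looks like from the sending side, sketched with a recent version of the Google Cloud Pub/Sub Java client; the project, topic and payload are placeholders. The publish call hands the message off and returns a future, and “B” pulls it from a subscription whenever it gets around to it.

```java
import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

// Rough sketch of the asynchronous "inbox" model with Google Cloud Pub/Sub:
// application "A" drops a message on a topic and moves on; application "B"
// reads it from a subscription on its own schedule. Names are placeholders.
public class SystemAPublisher {
    public static void main(String[] args) throws Exception {
        TopicName topic = TopicName.of("my-gcp-project", "system-b-inbox");
        Publisher publisher = Publisher.newBuilder(topic).build();
        try {
            PubsubMessage message = PubsubMessage.newBuilder()
                    .setData(ByteString.copyFromUtf8("{\"orderId\": 42}"))
                    .build();
            publisher.publish(message);  // returns a future; "A" does not wait for "B"
        } finally {
            publisher.shutdown();  // flushes any outstanding messages before exit
        }
    }
}
```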

So when some folks call JBoss Fuse a “real ESB,” they do have a point: While it does permit a pub/sub scheme, among others, as Red Hat’s Balakrishnan told us, it also enables a more synchronous, conversational approach between endpoints — one that is far better suited for modern orchestration platforms.

“With ESB, you are using a common message platform to abstract yourself from all your other applications that could be sending you data,” explained Shippable’s Cavale. Exchanged data must follow the form and format specified by a kind of manifest or contract, and responses from contacted endpoints either meet expectations or are queried for clarification.
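In route terms, that more conversational exchange is a request/reply (in-out) pattern rather than fire-and-forget, with the reply checked against an agreed contract. The sketch below is one way to express that in Camel's Java DSL; the JMS queue, timeout and schema location are hypothetical, and a configured camel-jms connection factory is assumed.

```java
import org.apache.camel.ExchangePattern;
import org.apache.camel.builder.RouteBuilder;

// Sketch of a contract-bound, conversational exchange: send a request, wait
// for the reply (in-out), then check the reply against the agreed schema.
// Queue name, timeout and schema location are hypothetical.
public class RequestReplyRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:askSystemB")
            // Request/reply over the shared message platform; the route waits for B's answer.
            .setExchangePattern(ExchangePattern.InOut)
            .to("jms:queue:systemB.requests?requestTimeout=10000")
            // Enforce the contract: the reply must conform to the agreed-upon schema.
            .to("validator:file:contracts/systemB-reply.xsd");
    }
}
```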

It’s this higher grade of platform that Red Hat’s Fuse enables on the Google platform, perhaps for the first time. And it’s the real reason why Red Hat’s new collaboration with Google actually is a big deal.


Title image: Carrick-a-Rede Rope Bridge, licensed under GNU Free Documentation License