How many times has this happened to you: You’re on your way overseas to cover a major news event in your field, when you’re informed by email that the plane you’re about to board has been removed from service?

Have a nice day, you’re told.

So you, and a few hundred others who had planned to board this plane a half-hour earlier, are instructed to wait in a queue in a special room that has been custom-fitted for the queue-waiting experience. There are linoleum chairs, no benches, and ropes that steer you through a maze, presumably just to help you maintain consciousness.

All that’s missing is the portrait of Hugo Chavez.

Then, in frustration, the people who run the airline to which you were connecting escort you to their specialist: the one person in the organization who knows how to get things done.

Her name is Lilia. Lilia knows all the phone numbers of all the people who can press the right buttons. She knows the precise tone of voice to use with each person whose voice appears on the line. She knows whom to appease and whom to threaten.

God bless Lilia. In the end, it’s a human being who cuts through all the red tape that the airline’s automation systems create.

(Though Lilia could not explain how three international flights “exit service” in the same hour.)

Or how about this? “Your flight just landed, and it landed early, and you end up sitting on the tarmac because your gate isn’t ready,” suggested Hewlett Packard Enterprise CTO Martin Fink during a keynote session last week at HPE Discover 2015 in London.

“And you think to yourself, how hard could this be? Just turn the plane and park. Because you look out the window, and there’s plenty of open gates,” Fink continued.

“The reality is, this is an extremely hard problem to orchestrate.”

How Hard Could It Be?

When the customer experience that an airline, or a partnership of airlines, delivers is bad ... it’s terrible. But the reasons are logistical, rooted in the exchange of data between systems that are ill-prepared to handle this much human traffic.

For years, the solution to this dilemma has involved adopting a new style of database — one that enables graph analytics. There’s a 1-in-2 chance your organization has already done so to some extent, according to a recent survey.

A graph database is fundamentally different from a conventional relational database, which stores records of related data in tables. With a graph, each element of data is a node that relates, in some accurately described and recorded fashion, to any number of other nodes.

A commuter, such as you or me, may have a record someplace that’s represented by a node. Your travel plans may associate you with a plane that has a travel itinerary of its own. Any number of planes may be associated with an airport where they plan to land, with an airline that owns them, and with a repair record that hints at their serviceability.
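To make the model concrete, here is a minimal sketch of that flight graph in Python, using the open-source networkx library. The node names, edge labels, and the query at the end are all invented for illustration, not drawn from any airline’s actual schema.

```python
import networkx as nx

g = nx.MultiDiGraph()

# Every entity -- traveler, plane, airport, airline -- is a node.
g.add_node("traveler:you", kind="traveler")
g.add_node("plane:N12345", kind="plane", last_service="2015-11-30")
g.add_node("airport:LHR", kind="airport")
g.add_node("airline:example-air", kind="airline")

# Relationships are edges, each one described and recorded.
g.add_edge("traveler:you", "plane:N12345", rel="booked_on")
g.add_edge("plane:N12345", "airport:LHR", rel="lands_at", eta="18:40")
g.add_edge("airline:example-air", "plane:N12345", rel="owns")

# A query walks relationships instead of joining tables:
# which travelers are affected if this plane exits service?
affected = [u for u, _, d in g.in_edges("plane:N12345", data=True)
            if d["rel"] == "booked_on"]
print(affected)  # ['traveler:you']
```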

“What if you could take every pilot, every flight attendant, every single plane, every baggage handler, every gate handler, for every gate, for every airport in the world, and put it in memory all at the same time, in one graph?” suggested HPE’s CTO.

A graph analytics application for a single airport would seem feasible enough, until you remember that planes aren’t limited to single airports, only to single planets. Fink’s point is that a graph analytics application capable of resolving commuter travel issues has to apply itself to the entire world, or it isn’t really feasible.

A developer of a real-time, role-playing simulation understands this very problem: Gamers phase in and out of simulated universes, in precisely the way that planes don’t.

A graph analytics application that limited itself to any subset of the world at large would have to account for the fact that planes, pilots, flight crews, staff, and commuters all phase in and out of its universe, adding a computational problem that would actually be more difficult to solve than just representing everybody ... assuming the computing platforms were ready for that job.

HPE is suggesting they’re not, at least not yet. It’s been hinting at a future architecture for a class of server that, right now, it’s only willing to refer to as “The Machine,” like the subject of a sci-fi story.

Deep Thought

“The vast pool of non-volatile memory at the heart of The Machine,” explained Fink, “lets you help pre-compute the scenarios of the future, so that all of your assets are there in real time.” Thus the records of previous weather events, such as a blizzard, at a particular airport could be applied to future scenarios in the background.

“So when the event actually occurs,” he said, “all you do is look up the solution, tweak it a little bit, and you’re good to go.”

A little less vaguely put, perhaps: A server pool with a huge amount of memory could actually solve problems that have yet to happen. For instance, it could resolve alternate commuter schedules in the event a plane must be removed from service, having already predicted the likelihood that such a removal would take place, when it would happen, and at which airport the plane would be stuck.

It could then look up hotel availability, and perhaps predict whether individual hotels could be completely booked by the time the removal event took place.
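As a sketch of that “look up the solution, tweak it a little bit” pattern, consider the following Python fragment. The scenario keys, the plan contents, and the choice of which scenarios get pre-computed are all invented for illustration; nothing here reflects The Machine’s actual programming model.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    rebookings: dict   # traveler -> replacement flight
    hotel_holds: list  # hotels pre-checked for availability

# Background job: for each (airport, disruption) pair judged likely,
# solve the rebooking problem ahead of time and keep the result
# resident in memory.
precomputed = {
    ("LHR", "plane_removed"): Plan(
        rebookings={"traveler:you": "EX-204 dep 19:55"},
        hotel_holds=["Airport Inn (12 rooms held as of 17:00)"],
    ),
}

def on_disruption(airport: str, event: str) -> Plan:
    # When the event actually occurs, the hard work is already done:
    # look up the solution and tweak it.
    plan = precomputed[(airport, event)]
    # ... tweak: drop travelers who already rebooked, refresh holds ...
    return plan

print(on_disruption("LHR", "plane_removed").rebookings)
```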

Such a system could be ready at a moment’s notice to react to such an event. So rather than an email that politely, but swiftly, leaves a commuter to his own devices, forcing him to sweat through the rope mazes of a virtual refugee camp, an attendant could meet the commuter at the gate (let’s call this attendant “Lilia”), inform the commuter of the trouble, and show him a tablet with a list of immediate options.

An attendant meeting up with a few dozen such commuters could probably rebook all of them within the hour.

It’s a consummation devoutly to be wished. The trouble, however, is that HPE suggests only The Machine will be able to handle the job.

Today’s fastest HPE server, said Fink, is capable of monitoring 50,000 events per second (50,000 things that happen in the real world), and can store five minutes’ worth of events. The Machine, by comparison, would handle 10 million events per second, with enough memory to store 14 days of events.

“This is what a 640 terabyte machine allows you to do,” he said.
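A quick back-of-envelope check on those figures, assuming (and the talk doesn’t actually specify this) that the entire 640 TB is devoted to event storage:

```python
today_rate, today_window = 50_000, 5 * 60              # events/s, seconds
machine_rate, machine_window = 10_000_000, 14 * 86_400

today_events = today_rate * today_window               # 15 million events
machine_events = machine_rate * machine_window         # ~12.1 trillion

bytes_per_event = 640e12 / machine_events
print(f"{machine_events:.3g} events at ~{bytes_per_event:.0f} bytes each")
# -> 1.21e+13 events at ~53 bytes each
```

That works out to roughly 50 bytes per event, a plausible footprint for a compact, fixed-width event record, so the arithmetic at least hangs together.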

Calculating the Alternatives

Not everyone agrees with HPE’s notion that the wheel must be completely reinvented. Graphics chip maker Nvidia is gearing up to produce a new class of GPU, code-named Pascal, that would tackle the graph analytics problem not by adding colossal memory pools but by radically expanding bandwidth.

A GPU is designed to execute the same type of instruction several thousand times in parallel. While that design enables the instantaneous rendering of 3D scenes, the same scheme can also be leveraged to process graphs of nodes in configuration space.

As Nvidia engineer Larry Brown argued in a presentation last June [PDF], greatly enhanced bandwidth enables more efficient analytics applications to find solutions and discard alternatives. As a result, a Pascal-class (or GeForce 1000-class, to use the eventual consumer brand) GPU with one-eighth the cache and one-eighth the main memory of an Intel Sandy Bridge-class CPU, but with ten times the bandwidth, could accomplish the same task.
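A toy illustration of why that works: the core of many graph analytics workloads is frontier expansion, one identical operation applied to every node in the current frontier at once. In the sketch below, numpy’s array operations stand in for the thousands of GPU lanes Brown describes; on actual Pascal-class hardware the same pattern would run as a kernel whose throughput is bound by memory bandwidth rather than cache size.

```python
import numpy as np

# Adjacency matrix of a small directed graph (say, airports 0..4).
adj = np.array([
    [0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
], dtype=bool)

frontier = np.zeros(5, dtype=bool)
frontier[0] = True           # start the search at node 0
visited = frontier.copy()

while frontier.any():
    # One data-parallel step: expand every frontier node's
    # neighbors simultaneously, then mask out nodes already seen.
    frontier = adj[frontier].any(axis=0) & ~visited
    visited |= frontier

print(np.flatnonzero(visited))  # [0 1 2 3 4]
```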

Certainly, supercomputers today are built from hundreds of CPUs and GPUs operating in parallel. Though supercomputers are given cute pet names, their architecture is often startlingly unsophisticated.

HPE’s competing architecture will be called fabric-attached memory, referring to the use of network interconnects to link memory directly. The manufacturer will soon release, for developers’ experimentation, something it calls the Fabric-Attached Memory Emulator (probably just because it wanted to claim that acronym), to give organizations their first glimpse at programming future graph analytics applications, and other tasks, using HPE’s cloud.

The problem HPE may still have yet to solve is this: Rip-and-replace solutions for large-scale data center architectures have never succeeded in the market — not once.

If HPE truly expects organizations to invest in The Machine, it will need some strategy for enabling them to invest in “Part of The Machine,” if you will, and then “The Next Part,” at its own pace — integrating The Machine with The Existing System along the way.

In the meantime, commuters will continue to be faced with disappearing planes, and nice people like Lilia will be plagued with potentially furious customers. Perhaps HPE could plug the scenario into The Machine, and render a prediction of just how long this plague will go on.

Title image by gagilas, licensed under a Creative Commons Attribution-Share Alike 2.0 Generic license.