We need a way, said marketing technologists attending CMSWire's DX Summit in Chicago last week, to get at the data describing customer experience. Shouldn’t that be as easy, I asked, as bringing all the stakeholders together into the same room?

Comprehending digital experience, most everyone attending DX Summit agreed, requires everyone responsible for that experience to at least experience the digits.

In organizations everywhere, IT and operations professionals, and some software developers, are capturing real-time data about customer experiences. Yet that data isn’t being shared, for reasons that probably aren’t personal.

Some allege the barriers are cultural. But application performance monitoring firm New Relic suggests the solution may be bringing all the stakeholders together around a common interest.

One Really Big Cloud

Yesterday morning in San Francisco at its FutureStack 15 conference, New Relic executives introduced a completely cloud-based data store, collecting real-time performance data from clients’ web sites, accessible to IT, DevOps, software professionals, operations managers, marketing technologists and company executives through a browser-based portal.

“Why do you care about application performance? Why do you care about application health?” asked New Relic CEO Lew Cirne, to a room full of people who clearly care about both. “At the end of the day, it’s all about, is the customer having a good experience?

“It’s not how many errors-per-minute,” Cirne continued. “It’s how many customers are having errors, and what are the impacts on the customer experience, as to whether or not they’re coming back.”

We introduced CMSWire readers to New Relic earlier this year. It produces a performance monitoring system built around an “agent” that is injected into applications: into the server-side software that delivers them to users, and also into the browser-side code with which users interact directly.

These agents, which New Relic calls APM, echo the signals produced by browsers and by server software. Historically, those signals have been captured by servers and stored in comparatively colossal databases.

With myriad factors, including cloud dynamics, mobile usage models and microservices, radically changing the architecture of applications, any vendor that wants to remain in the APM market for much longer had better change its tune.

Last month, CMSWire introduced you to Dynatrace, whose strategy for bringing its own APM data into the marketing and executive discussions involves the creation of a customer experience index.

New Relic’s strategy also involves dashboards, although it has chosen to leave the formulas for extrapolating satisfaction levels for its users to determine themselves. Today, the company is focusing its own customers’ attention outside their own data centers (where New Relic data had been stored up to now), and onto a cloud where departments might access it more readily.

Single File

Decades ago, organizational departments had their own technology budgets, Cirne explained, and for this reason they ended up collecting different aspects of customer-related data in separate, non-interoperable databases. Up until cloud-based SaaS became a reality — and even some time afterward — these departments simply built new projects and new reporting schemes onto the old projects.

“You had folks saying, ‘Sales are down, I can see that from a report from last week,’” said Cirne, “and there’s no connection to the fact that it’s related to an application performance problem or a server change or a customer experience issue.

“We think that has to stop. We think all this data belongs in one file format.”

The single cloud, Cirne suggests, eliminates the problem that corporate culture may have created, but technology actually exacerbated: hard-wired inaccessibility to critical customer data across departments.

New Relic has already deployed, and has been building on, a derivative of Structured Query Language called NRQL for querying performance data in real time. With New Relic’s Software Analytics Cloud, announced Thursday, users will be given a tool called, simply enough, the Visual Data Explorer.

With this tool, the performance data that APM agents collect about a web site throughout the delivery process is displayed on live histograms. Cirne presented a live demo that appeared to show Web page latency data from New Relic’s own servers, timed to within one second of the current time.

Cirne’s hope is that users of his company’s new analytics cloud will drill down through layers of information by category. As they do, the Explorer tool will effectively translate that drill-down into a working NRQL query, which DevOps professionals can tweak, automate and re-use.
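
To give a concrete sense of what such a generated query might look like, here is a minimal NRQL sketch; the PageView event type and the duration and countryCode attributes are illustrative assumptions on our part, not details New Relic showed in the demo:

    SELECT count(*), percentile(duration, 95) FROM PageView FACET countryCode SINCE 1 week ago

A query along these lines counts page views and estimates 95th-percentile load time, broken out by country, over the past week. A DevOps engineer could adjust the time window or the facet, then re-run or automate it, which is precisely the tweak-and-reuse workflow Cirne described.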

Such queries, once they are discovered, would correlate the logistical and telemetric aspects of Web site communication with their real-world relevance to business transactions. That is a big stretch, and, let’s face it, there isn’t exactly a plethora of examples just yet.

New Relic CEO Lew Cirne at FutureStack 15

“In previous generations of technologies before the cloud, the only rational ways you could measure all this was to sample, or index, or throw away, or aggregate data,” the CEO explained. “You couldn’t capture everything all the time; it was just too much data to collect on-premise, certainly too much data to query in real-time on-premise in an arbitrary fashion.”

Oversight as a Service

What New Relic is trying to work out is a formula for selling its performance management, by way of analytics, as a service.

Its initial tack works like this: The company’s cloud will make one week’s worth of non-aggregated data available to customers at no additional cost above their existing APM license fees. Firms that require histories longer than one week can negotiate rates with New Relic.

Beyond that non-aggregated period, the analytics cloud will condense older performance data, just as it has done in the past for customers who stored their data entirely on-premise.

“If you want to de-aggregate all of this, you need to collect all of the data points for everything that happened,” Cirne told attendees. “One page load might trigger ten, fifty, a hundred events.”

Multiply those hundreds of events by the total number of pages being viewed by the total number of customers, he suggested, and you start to see the scale of the problem. If one customer ends up not being able to make a purchase because of one database problem — not a hundred customers, but just one — then the only way to accurately diagnose the issue before it repeats itself, New Relic’s CEO suggested, is to be able to perceive the problem at the most granular level.
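
A rough back-of-envelope illustration, using hypothetical figures: at 100 events per page load and one million page views a day, a single site would generate on the order of 100 million events daily, every one of which must be retained if the data is to stay un-aggregated.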

The unresolved issue, which New Relic for now leaves open, is how non-programmers, armed with a browser and a query command line, can find their way through a wilderness of data that the company’s CEO already characterizes as too big for an on-premise data center, and make the connection between an uncompleted online sale and an actionable solution.

We may learn more about this open question as FutureStack 15 in San Francisco continues.
