Customer experience too often remains isolated from the technical metrics that drive it. As Aaron Rudger, senior director of product marketing at Dynatrace, explained, “In the performance measurement domain, typically, we see a customer start with very technical metrics that are disconnected from customer experience.”

Dynatrace recommends a shift in perspective. It urges customers to align the measurement of technical quality — the performance of all the technology that is being delivered — to customer experience.  

"That means, you need to understand your customers across all of the different channels with which they interface with you — desktop client, mobile device or some other point of consumption,” he added.

Defining Application Performance Management

In the first installment of this three-part series, we introduced you to the key application performance management (APM) problem facing most enterprises. In short, it's a people problem:  Software developers, network administrators, marketing professionals and customers all have substantially different concepts of digital performance.

One group may think it has a handle on all the factors that coalesce to produce a proper customer experience. Yet when that experience suffers, each group’s diagnosis is a product of its own exclusive context (e.g., resource consumption, bandwidth availability, page component fluidity, and responsiveness, respectively).

So when it comes time to apply APM, the factors initially monitored are usually determined by whoever controls the budget for customer experience.

Heeding Yellow Flags

The typical APM process works like this: A series of automated agents is distributed throughout a network. Each agent sends signals when events happen, such as the start or completion of a web page download or the acquisition of a record from a data store, and a centralized monitor records those signals.
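To make that architecture concrete, here is a minimal sketch of what one such agent signal might look like. The collector URL, the emit_event helper and the event names are hypothetical, chosen only for illustration; commercial agents use their own instrumentation and wire protocols.

```python
import json
import time
import urllib.request

# Hypothetical collector endpoint; real APM agents speak vendor-specific protocols.
COLLECTOR_URL = "http://apm-collector.example.com/events"

def emit_event(source, event_name, status):
    """Send one timestamped event signal to the central monitor."""
    signal = {
        "source": source,        # which agent is reporting
        "event": event_name,     # e.g. "page_download" or "record_fetch"
        "status": status,        # "start" or "complete"
        "timestamp": time.time(),
    }
    request = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(signal).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

# An agent in the web tier might bracket a page download like this:
emit_event("web-agent-01", "page_download", "start")
# ... the page is assembled and delivered here ...
emit_event("web-agent-01", "page_download", "complete")
```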

From there, the hope is this: An interpreter running on a central server should be able to ascertain when an anticipated sequence of those events has become slow or at least relatively slow. Ideally, the person who monitors this server doesn’t really have to know how a distributed network application is designed or constructed to comprehend when he’s being told something is “slow.”

Put another way, you shouldn’t have to be a developer to know the meaning of a yellow flag.
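How might that interpreter decide a sequence has become “relatively slow”? Here is a simple sketch, assuming the central server keeps recent completion times for each transaction and compares the latest one against that rolling baseline. The flag_color function and its thresholds are hypothetical, not any vendor’s actual heuristic.

```python
from statistics import mean, stdev

def flag_color(recent_durations, latest):
    """Judge the latest transaction time against its own recent history."""
    if len(recent_durations) < 2:
        return "green"  # not enough history to judge
    baseline = mean(recent_durations)
    spread = stdev(recent_durations)
    if latest > baseline + 3 * spread:
        return "red"     # severely slow: interrupt someone
    if latest > baseline + 1.5 * spread:
        return "yellow"  # relatively slow: worth a look
    return "green"

# The operator sees only the color, not the topology behind it.
recent = [0.82, 0.79, 0.91, 0.85, 0.88]   # seconds
print(flag_color(recent, 0.95))           # prints "yellow"
```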

In the past few years, APM suites have employed new, more graphical means of communicating yellow flags or their symbolic equivalents. CIOs tend to be impressed by instantaneous information about slow transaction rates — often more so than by being continually informed when transaction rates are not slow.

The dilemma is this: Once negative APM signals have been boiled down, re-interpreted and transposed into a form that unfailingly gets an operator’s or admin’s attention, can that same data be conveyed to a developer or network operator in such a way that they can actually remedy the problem?

Connecting Business Value to Performance

For their part, analysts at Forrester have spotted the disconnect.  In a report published last February (PDF, registration required), Milan Hanson and James McCormick accurately identified it: “Businesses build applications to generate business value — to engage customers with content, to make a sale, etc. — not to deliver performance. Performance does impact these objectives, but quantifying the exact impact on the business is not an APM priority.”

McCormick and Hanson were framing the problem in a context that aligned with a solution offered by the report’s co-sponsor — a company called Soasta, which sells load testing and performance monitoring solutions. Soasta calls itself a provider of digital performance management (DPM) solutions rather than APM solutions: a symbolic means of shifting the performance problem from application events to business objectives.  

Forrester calls this transition a “tighter alignment with business requirements, and ever-changing customer needs and expectations.”

But this is the same alignment that business analysts have been recommending since the Watergate era. If only the IT department embraced real-world business objectives instead of its own infrastructural gobbledygook ...

In fairness, the divide between the IT department and business units has been shrinking. But then technology re-invents itself, and like a game of Sorry, we send all our pawns back to Start.

Virtual War Room

APM maker AppDynamics has tried a sort of invitational approach to bridging this gap. Its suite features a collaborative chat facility called a virtual war room, where the parties responsible for customer experience inspect the various performance indicators, projected on a giant (or giant-looking) custom dashboard. There, everyone may discuss what the key performance indicators (KPIs) mean and what to do about them.

In a recent demonstration video, AppDynamics engineers showed a web-based console with a kind of shared digital pegboard. Here, an operator can attach widgets showing live heuristic charts of website performance, chat with another operator, and refer to individual data points on charts using marks that all the other operators in the war room can see.

It’s a strategy we’ve seen before: If only we can get everyone together in the same room — even a virtual one — they should be able to knock out all the obstacles to success among themselves.

But success in this context still depends on whether everyone in the congregation can agree to speak the same language.

“What we find is, typically, you have a lot of human resource devoted to basically looking at whether lights are red or green,” Rudger said.  “Ultimately, what you really want to be able to do is move to a place where your ability to take action is automated as much as possible.”
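Rudger’s point about automation can be illustrated with a toy example. The sketch below is not Dynatrace’s mechanism; it simply shows the difference between a human watching the lights and a rule that acts on them, with placeholder actions standing in for whatever remediation a real pipeline would trigger.

```python
def automated_response(service, status):
    """Act on a health signal instead of waiting for someone to notice the color."""
    if status == "red":
        return f"restart {service} and open an incident"
    if status == "yellow":
        return f"scale out {service} and notify the on-call channel"
    return f"{service} healthy; no action taken"

# Instead of an operator scanning a wall of lights:
for service, status in {"checkout": "red", "search": "green"}.items():
    print(automated_response(service, status))
```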

In the third and final part of this series, we’ll discuss the challenges of deriving a plan of action from the ever-accruing mountain of performance data.

Title image of a yellow flag at the Indianapolis 500 practice by Scott Fulton