The CX Index

The goal for Dynatrace is to produce an application performance-monitoring platform that businesses will want, not just IT departments. If you’re in IT and thinking, “But I am the business,” then this article is for you.

At the start of the Tuesday keynote session at Dynatrace’s Perform 2015 conference in Orlando, CEO John Van Siclen announced, “APM has to evolve.”

His first volley toward that goal came in the form of an automated metric he hopes will draw new attention toward a decade-old performance monitoring tool that many businesses are surprised to learn they already have.

“It’s no longer operations and development,” said Van Siclen.

“It now includes the business, because the business owners, more than ever before, care about the real-time experience and behavior of what’s happening in these new applications of engagements that are exploding within our enterprise.”

Granted, some of Van Siclen’s metaphors were exploding as well, but his point was evident: There are new classes of applications being made feasible by cloud platforms and by containerization.

Green Means Good

Dynatrace’s archrival, New Relic, has already made a play for this space.

In the meantime, Dynatrace was in the midst of a company reformation: a separation from corporate parent Compuware, coupled with an acquisition of one-time competitor Keynote.

To get back in the game and be mentioned in the same sentences as New Relic, Dynatrace is looking to entice the C-suite with a component of its visualization package that contains a “Customer Experience Index.”

It’s a way for the thousands of little factors that make up a user’s everyday experience with a Web application (or other distributed application) to be aggregated into a single symbol that signifies what online customers must be feeling right this moment.

There’s no technical breakdown to it: It’s an emoticon.

“It’s not just an apdex score of page response times,” said Van Siclen, referring to the Application Performance Index, a score frequently cited in the APM field that rates how well response times meet a target threshold.

“It’s actually much more sophisticated. It includes response times, of course, but for every tap, click and swipe, from every customer engaging your application around the globe, it includes errors and crashes. It includes the context and environment that the user is working from (for example, dial-up versus DSL). And it includes the concept of a whole visit.”
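(For reference, the standard Apdex formula buckets each measured response time against a target threshold T: samples at or under T count as “satisfied,” samples up to 4T count as “tolerating,” and anything slower is “frustrated.” The score is then (satisfied + tolerating/2) divided by total samples, yielding a value between 0 and 1.)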

So it does have a sophisticated breakdown, using a formula that Dynatrace’s veteran users will appreciate.

But on the surface, it’s a big green smiley face, a yellow concerned face or a red frowny face.
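To make the aggregation concrete, here is a minimal sketch in Python of how such an index might roll response times, errors and connection context up into one score and one face. Everything here (the field names, the weights, the 0.85 and 0.60 cut-offs) is a hypothetical illustration, not Dynatrace’s actual formula, which this article does not spell out.

```python
# Hypothetical sketch of a customer-experience index. The weights,
# thresholds and inputs are illustrative guesses, not Dynatrace's
# actual formula.
from dataclasses import dataclass

@dataclass
class UserAction:
    response_ms: float   # time taken by one tap, click or swipe
    failed: bool         # did the action end in an error or crash?
    slow_network: bool   # e.g., the user is on dial-up rather than DSL

def experience_index(actions: list[UserAction], target_ms: float = 500.0) -> float:
    """Roll one visit's actions up into a 0.0-1.0 experience score."""
    if not actions:
        return 1.0  # no data observed: nothing to complain about yet
    satisfied = tolerating = 0
    for a in actions:
        if a.failed:
            continue  # errors and crashes count fully against the score
        # Allow a more forgiving threshold for slow connections.
        threshold = target_ms * (4.0 if a.slow_network else 1.0)
        if a.response_ms <= threshold:
            satisfied += 1
        elif a.response_ms <= 4 * threshold:
            tolerating += 1
    # Apdex-style core: satisfied actions count fully, tolerating half.
    return (satisfied + tolerating / 2.0) / len(actions)

def face(score: float) -> str:
    """Map the score to the dashboard's green/yellow/red faces."""
    if score >= 0.85:
        return "green :-)"
    if score >= 0.60:
        return "yellow :-|"
    return "red :-("

visit = [
    UserAction(response_ms=320, failed=False, slow_network=False),
    UserAction(response_ms=2400, failed=False, slow_network=True),
    UserAction(response_ms=150, failed=True, slow_network=False),
]
score = experience_index(visit)
print(f"{score:.2f} -> {face(score)}")  # prints: 0.50 -> red :-(
```

Whatever the real weights are, the design point survives the simplification: the smiley face is a projection of a richer score, not a replacement for one.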

Van Siclen’s introduction catalyzed some necessary discussions, throughout the Perform conference and well into yesterday, about the culture shifts that need to take place throughout organizations in order for “performance” to mean the same thing to everyone involved.

The fact that the new Customer Experience Index in Dynatrace version 6.3 isn’t really all that nuanced led to more than a few chats among attendees here about whether a performance index is still meaningful if it has to be candy-coated for the CIO, or especially the CMO.

“Team Speed”

The need for a culture shift among IT, business leaders, software developers and … what’s the word I’m looking for … oh yes, marketing, became the dominant theme of the entire conference.

“When we make a Web service 500 milliseconds faster, that’s really important to us as technologists, to move things forward, right?” asked Mat Pickering, the manager of the “Team Speed” performance group at financial services provider TIAA CREF.

“Then the business always says that special thing, when you’re so proud of yourself and you show them these great dashboards and these great things and all you’ve done to move this culture of speed forward: ‘So what?’”

Pickering told the story of how his team built a very sophisticated performance monitoring front end, nicknamed THOR.

It’s an effort to wrap customer-facing performance metrics around multiple, selectable contexts, each of which may pertain to a different segment of the business.

Put another way, it’s an effort to do a performance index that looks more like an index than a smiley-face.

“Performance is a performance,” said Pickering.

“You’re selling all the time, whether you are a developer and you’re talking to the guy building your Web service, or you’re a manager and you’re talking to your team, or you’re talking to the business, or you are the business: You are selling performance. That’s what you do.”

Citing a phrase from author Alistair Croll, Pickering went on to say that such selling is made difficult, if not impossible, by vanity metrics — numbers that may have been derived from real measurements, but which don’t reflect what the customer is actually perceiving.

With exactly the same performance data Team Speed had already been collecting, Pickering showed that he could demonstrate the efficiency of his company’s transactions for any given interval of time. (We were not given permission to photograph THOR in action.)

Team Speed is not the IT department, nor is it the dev shop. It’s a separate operating unit of TIAA CREF that specializes in performance.

When something goes wrong, the IT department looks to Team Speed to resolve the issue. When something goes right, the business end looks to Team Speed to document why.

This may sound like a lot of pressure for one department to handle. As it turns out, Pickering believes that automating issue resolution for both sides of the company ends up benefitting the business with efficiency and improved communication.

“You don’t use business analytics to solve these problems,” the Team Speed leader said, in one of the most important utterances of the entire Perform conference. “You have to drive it from the IT side.”

“Buy-in”

On Thursday afternoon, American Express Chief Cloud Architect Brian Davis spoke to that very issue: the driving influence necessary to facilitate communication about performance within the organization.

“In financial services — maintaining credit cards and those types of things — change is slow,” Davis admitted, just moments after taking the stage. “In financial services, we always think that we’re driving change.

“That’s really not true. The reality is, you’re driving change,” he said, referring to the IT and DevOps personnel and the “Team Speed” counterparts in the audience. Without an ounce of shame, he added, “It takes technology companies to make change, to force financial companies to make change.”

Davis told the story of how he and his colleagues learned about the very existence of OpenStack by attending open source conferences where practitioners spoke a language unlike any they had heard before.

But he caught up, and within 18 months, his Amex credit group had a working private cloud based on OpenStack — a process, he admits, that “wasn’t as smooth as was hoped to be.”

Davis admits he thought the cultural change had already happened in his organization. “We had to have buy-in and tie-in from all these different groups,” said Davis, “and everyone was all on-board and ready to go.”

Because OpenStack’s release cycle is measured in months rather than years, Amex was soon four releases behind, but catching up.

“The problem is, once there is a problem, resistance starts happening,” admitted the Chief Cloud Architect.

Like TIAA CREF’s “Team Speed,” Davis’ Amex cloud team was a kind of bridge between IT and business. But when the inevitable execution issues did impact performance, he said, IT asked his team to slow down.

“The resistance started to mount, and new technical issues started to arise during the deployment of these cloud services,” he confessed. Business managers and executives were warning that their customers were being put at risk simply by using the platform.

“We had to look at them and say, ‘You know what? We all agreed to this. This is our strategy… We’re trying to get off those legacy mentalities that you’re propagating right now.’

“That’s the hardest message to take to executives,” Davis said. “‘You know what? Sometimes it’s gonna hurt.’”

In Amex’s situation, the panic was more of a problem than the actual performance issue. Davis’ team was put in the position of consoling business units, letting them know that they, too, held the customer in the utmost regard.

“The reality is, you don’t really have buy-in until someone has been impacted,” he stated. Engineering teams and monitoring teams, he suggested, need people capable of communicating the importance of sticking with the technology strategy, pushing the customer-centric vision from the middle up.

Dynatrace Perform 2015 concludes today. Thus far, it has been one of the most forthright venues I’ve seen this decade for open discussion of monitoring’s importance to customer experience.
