At the conclusion of this article, you will have the opportunity to take part in a brief survey about your experience. If you wish to participate, say “Yes” or press 1 now.
How do you assure yourself that your organization’s customers or patrons or citizens have been treated to that “great customer experience” you’ve proclaimed to be your goal? Is there any chance you take a poll?
And if you do, does that poll ask customers to rate their experience, say, on a five-point scale?
I suspect you’ve had to take such a satisfaction poll more than once over the course of your life.
During one such time, while you were conjuring in your mind the meaning of a “5” experience versus that of a “1” experience, did you record for future reference the process by which you derived that scale for yourself?
For instance, was a “5” comparable to a vacation on St. Thomas, while a “1” was something akin to the Al Gore / Jack Kemp vice presidential debate of 1996? (Who can forget that great Gore line, “Bob Dole never met a tax he didn’t hike.”)
What are the chances that any other customer would attribute the same meaning for that scale that you did?
Time and again, this publication has advised you to step into your customer’s shoes, to experience the world and your services as she perceives them, in order to fully understand the context in which that customer is conducting business with you.
A full five years ago now, our Gerry McGovern declared scale-based ratings methods “broken,” noting a scientific study showing that when you give people a scale of 1 to X, a plurality of them will pick the midpoint, (X / 2), regardless of what the question is.
Why do you suppose that is? I would submit that when you ask your customers to define your metrics for you, their tendency will be to conjure a fanciful scale with infinite possibilities at both extremes.
Reality is always smack-dab in the middle of two fanciful extremes. So their answers will always be as fanciful as their scales.
It therefore follows that the aggregate of a variety of responses, none of which can possibly fit together on the same scale, is a meaningless number. “Over 10,000 customers rated their experience on our Web site a 4 or better” means absolutely nothing.
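To make that concrete, here is a toy sketch in Python. Every anchor and number below is a made-up assumption, not survey data: three respondents who all had the same middling experience, each rating it against a private, incompatible scale.

```python
import statistics

# Hypothetical respondents, all describing the SAME middling experience.
# Each carries a private anchor for what a "5" means.
personal_scales = {
    "anchors 5 to a Caribbean vacation": 2,  # a decent visit rates low against paradise
    "anchors 5 to 'no problems at all'": 5,  # the same visit hits the ceiling
    "defaults to the midpoint":          3,  # as the study Gerry cited suggests
}

ratings = list(personal_scales.values())
mean_rating = statistics.mean(ratings)

# One tidy-looking number, built from three scales that never fit together.
print(f"Mean rating: {mean_rating:.2f}")
```

The average comes out looking authoritative, even though no two respondents were measuring the same thing.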
“To combat this problem,” writes FluidSurveys University, “it is useful to add descriptors either in the wording of the question or on the ends of your numbered scale. This will give respondents a better idea of what the scale represents to them and also allow the researcher to more accurately define the meaning of the data he/she is collecting.”
In other words, the only way we’re going to give this pointless scale any kind of collective context is if we narrow the scope so thinly that it can be described with a set of smiley-face emoticons. “5 = Very satisfied... 1 = Very dissatisfied.”
Realize what you do when you narrow your focus to this extent: You are no longer interested in the customer’s experience as a whole. You’ve settled for an emotional reaction, rather than an evaluation of the benefits to the person.
Ask yourself the following: If you were to present your company’s product or service to your mother, would you ask her to rate it on a 1-to-5 scale, with 1 meaning very dissatisfied, etc.?
Can’t you imagine your mom asking you, perhaps very pointedly, to nail down the meaning of the scale a bit more?
When you care about a person, there’s a good chance you already understand the context of her life and perhaps share it with her. If you’re cooking something for Mom and care about her opinion, you ask more pointed questions in search of a common understanding you already know is there.
My mother was, in the clinical sense of the word, a genius.
An art teacher for all of my life but an advertising manager before then, she understood the architecture of products and the construction of messages better than anyone I’ve met, including those currently in the marketing or advertising industries.
On a few occasions, a telephone survey taker asked her for her opinions about certain products. Mom would always preface her response with, “Now are you certain you’re really interested? Because if you’re just going to put me on hold, I’ll save my energy and paint a portrait with it.”
“Yes, ma’am, certainly I’m interested.” Then Mom would dive deep into the details of product design, spontaneously, unscripted, for over two hours.
She understood the way that boxes of instant cocoa mix should be assembled, so that the envelopes were perpendicular to the axis of entry, like a drawer or a Rolodex.
She believed insurance company brochures should be printed in color, but utilize indentations and white space along the left side to instill calm and order.
Yet she never gave a lecture in her life. Her approach was always conversational, asking questions, beseeching the survey taker to remember what things were like when he was young.
For instance, did he survive World War II? Did he serve in the military? Did he ever use the product with his own children?
By the time the session was over, she knew more about the surveyor than he did about her.
Anyone truly listening to Mom would receive the greatest understanding of customer experience they could ever hope to attain from a single person.
Mind you, this was well before the era of the web.
But I surmised from doing some listening of my own that the surveyor had long ago concluded that Mom was an “outlier,” someone who couldn’t possibly represent the broad customer base... simply because she could articulate the experience so clearly.
The Right Signals
Last week, I spent three days listening to experts in the field of application performance monitoring, specifically from a company called Dynatrace.
Their system captures fine-grained metrics about how customers interact with their software, particularly when it’s being distributed to them via servers.
When an IT or DevOps professional presents a report about page load times or CPU utilization to the CIO, there’s a high chance it says nothing about customer experience that she’ll understand.
So for Dynatrace, there’s a sophisticated system of aggregating this multitude of signals, using formulas, rules and conditions to convert them into values that the CIO, or even the CEO, can more readily comprehend — customer conversion most importantly among them. Dynatrace calls the products of these rules “business transactions.”
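To give a feel for the idea, here is a minimal sketch of that kind of rule, not Dynatrace’s actual system or API. Every field name, threshold and dollar figure below is an illustrative assumption: raw per-request signals are filtered through a simple business rule to yield one figure a CEO can act on.

```python
# Assumed tolerances and values, purely for illustration.
SLOW_THRESHOLD_MS = 3000   # assumed load time at which a buyer gives up
AVG_ORDER_VALUE = 80.0     # assumed average cart value, in dollars

# Hypothetical raw signals, one record per request served.
requests = [
    {"page": "/checkout", "load_ms": 1200, "converted": True},
    {"page": "/checkout", "load_ms": 4100, "converted": False},
    {"page": "/search",   "load_ms": 3600, "converted": False},
    {"page": "/checkout", "load_ms": 3900, "converted": False},
]

# Rule: only slow, non-converting checkout visits count toward the figure.
at_risk = [
    r for r in requests
    if r["page"] == "/checkout"
    and r["load_ms"] > SLOW_THRESHOLD_MS
    and not r["converted"]
]

revenue_at_risk = len(at_risk) * AVG_ORDER_VALUE
print(f"Estimated revenue at risk from slow checkouts: ${revenue_at_risk:.2f}")
```

The point of the translation is the last line: “two checkout visits over three seconds that didn’t convert” means little to a CEO; an estimated dollar figure means a great deal.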
There’s a strong third-party support network in place, made up of professionals who are Dynatrace users, who tune and perfect their business transaction formulas and who share the results with each other.
What I heard from several of them was loud and clear: Even after a decade of measuring performance this way, it’s very difficult for these professionals to deliver metrics to business managers and direct reports in a form they’ll accept and incorporate into their business decisions.
Some of these business managers would prefer a scale of 1 to 5.
Indeed, some of these higher-ups have invested significant capital in mining social media, especially Twitter and Facebook, combing terabytes of text in search of signals that possible customers may possibly be sending to possible friends about, possibly, their products.
When the first Big Data systems were made commercially available, they were marketed as social media sifters — and, in a few cases, as nothing else.
I first started writing about customer sentiment analysis tools for social media when it seemed like science fiction. It was only four years ago.
Think about this with the depth of vision my own mother would have employed: What does it say about your company, when you invest time and resources in scientific analysis of the context of conversations and interactions that your customers are participating in, outside of your company’s domain, in search of signals that may be meaningful to your company?
What’s the matter? Don’t you trust the conversations and interactions and experiences that are taking place inside your domain?
And if you can’t, whose fault is that?
My mother understood more about the psychology of marketing than a woman was permitted to exhibit in the 1950s.
She would be the first person to say that, if all this technology we’ve invested in and all these networks we’ve built fail to achieve the principal job of communicating with the people we do business with, then these systems are not really channels but walls.
If the numbers you’re reading about your customers’ experiences tell you nothing about who your customers are, then they’re telling you nothing that truly matters.