The Gist
- Don't rely on siloed data. While sometimes helpful, siloed data is reductive and must not be the determining factor in how a customer’s experience is judged.
- Changing the aim of metrics. With metrics like CSAT and other contact center measures, leaders should prioritize customer satisfaction over efficiency.
- Finding a broader scope. A complete understanding of CX requires not just one solid metric, but a combination of metrics that measure various parts of a customer’s journey.
Leaders and business owners have been concerned with customer service and satisfaction since the dawn of civilization. The ancient Greeks famously traded goods and conducted business deals at agoras, or large-scale marketplaces, between the 13th and fourth centuries B.C.
Since then, we’ve come a long way in how we conduct business, and consequently, the way we manage customer experience (CX). With the advancement of modern technology and a better understanding of the customer-vendor relationship, old CX metrics don’t cut it anymore.
Leaders need to be innovative and on the pulse of new trends, which is why CX metrics are ever evolving. We caught up with two CX leaders to discuss popular CX metrics and how they're viewing them today for their businesses.
Net Promoter Score (NPS): Promoting Siloed Data
When considering the most effective CX metrics, Kim Sayers, content marketing specialist at United World Telecom and a CMSWire Contributor, said Net Promoter Score (NPS) sits low on her list.
According to Sayers, this is because NPS typically relies on a single question to collect customer data. That can be limiting and often inaccurate: one question can’t capture the full scope of a customer’s overall satisfaction, and it tends to oversimplify their experience. As a result, the customer’s underlying concerns and issues go unnoticed.
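For reference, the standard NPS arithmetic is a single 0-10 "How likely are you to recommend us?" question, bucketed into promoters and detractors. The sketch below is purely illustrative (it is not drawn from either interviewee's tooling), but it shows how that single question can flatten very different sets of responses into the same number.

```python
# Illustrative sketch of the standard NPS calculation from one
# 0-10 "How likely are you to recommend us?" question.
def net_promoter_score(ratings: list[int]) -> float:
    """Percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Two very different experiences collapse into the same score:
print(net_promoter_score([10, 9, 2, 3]))  # 0.0 -- polarized customers
print(net_promoter_score([7, 8, 7, 8]))   # 0.0 -- lukewarm "passives" vanish
```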
Even still, Sayers noted that while NPS is a useful (and popular) tool for measuring customer loyalty and advocacy, businesses should implement other CX metrics alongside it to build a more comprehensive view of the customer’s experience. This way, NPS need not be abandoned entirely, but rather used in tandem with other ways of gauging customer satisfaction to fully understand how the customer feels.
Brianna Langley Henderson, regional customer experience specialist at Waste Connections and a CMSWire Contributor, agrees that this type of siloed data is hindering and reductive.
“Metrics around human behavior are best served as a casserole, not individual dishes,” Langley Henderson said. “Customers are humans, and humans are complex. So the stories we glean from data must also bring a certain level of complexity if they are to depict an accurate representation of the customer experience.”
It’s important to remember that variables like a consumer’s emotions and the price of what they’re buying must simultaneously be taken into consideration to get a comprehensive snapshot of a customer’s experience. NPS and similar types of data aggregation can be tricky, according to Langley Henderson, and can take years to hone.
This is why some CX leaders find it easier to take the “ingredients” of a successful CX metric and formulate the recipe on their own. They rarely get the ratios right the first time around; building a solid foundation with the right mix of data takes trial and error, but getting that mix right is ultimately the key to success.
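To picture that “recipe” idea, here is a hedged sketch of one way a team might blend several metrics into a single view. The choice of metrics, scales and weights is an assumption made purely for illustration, not a formula endorsed by Sayers or Langley Henderson; tuning those ratios is exactly the trial-and-error work described above.

```python
# Hypothetical "casserole" score: blend NPS, CSAT and CES into one number.
# All weights and scales below are illustrative assumptions, not a standard.
def blended_cx_score(nps: float, csat: float, ces: float,
                     weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
    """Normalize each metric to a 0-100 scale, then take a weighted average."""
    nps_norm = (nps + 100) / 2        # NPS ranges from -100 to 100
    csat_norm = csat                  # CSAT is already a 0-100 percentage
    ces_norm = 100 * (7 - ces) / 6    # CES on a 1-7 scale; more effort = worse
    w_nps, w_csat, w_ces = weights
    return w_nps * nps_norm + w_csat * csat_norm + w_ces * ces_norm

# Adjusting the weights ("ratios") is the trial-and-error part:
print(round(blended_cx_score(nps=20, csat=85, ces=3.5), 1))  # 69.7
```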
Related Article: Ecommerce Metrics: Which Numbers Matter the Most?
Customer Satisfaction Score (CSAT): Don't Stick to a Script
Historically, customer service representatives have dealt with the brunt of unhappy customers. Nobody likes being on hold or working out a problem over the phone, and the touchy, tentative nature of some disgruntled customers can be exacerbated by a contact center worker’s adherence to a script.
Furthermore, company evaluations are often based on employees’ ability to stick to their script and complete tasks efficiently, but this approach fails to account for the customer's satisfaction with their experience on the phone. Langley Henderson believes CSAT in contact centers must be prioritized to benefit both the customer and the representative.
“This shift in emphasis allows businesses to measure the actual quality of service they're providing, not just how quickly a process is completed," she said. "In my experience, customer support reps are better able to provide personalized customer service experiences when given the space to operate outside of rigid scripts. This generally leads to higher customer retention rates and greater overall satisfaction with services provided.”
Instead of sticking to a script, it's more beneficial for companies to look at metrics like CSAT and reward creative, improvisational problem solving when assessing the performance of their contact center representatives. After all, CSAT in contact centers is often gauged by a post-phone call survey, and according to an article by SQM, customers completing these surveys are not only comparing you to contact center competitors, but to any customer service they’ve ever had. Earning a higher CSAT score depends on your representatives’ ability to prioritize the customer’s specific needs, not on their adherence to a general script.
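For context, contact center CSAT is commonly computed from those post-call surveys as the share of respondents choosing the top satisfaction ratings. The sketch below assumes a five-point scale with 4 and 5 counted as satisfied; the exact scale and threshold vary by survey design and were not specified by the interviewees.

```python
# Illustrative sketch of a common post-call CSAT calculation
# on a 1-5 satisfaction scale (assumed; scales vary by survey).
def csat_score(responses: list[int]) -> float:
    """Percent of respondents answering 4 (satisfied) or 5 (very satisfied)."""
    satisfied = sum(1 for r in responses if r >= 4)
    return 100 * satisfied / len(responses)

# A week of post-call surveys for one representative:
print(round(csat_score([5, 4, 2, 5, 3, 4, 5]), 1))  # 71.4 -- satisfaction, not handle time
```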
Customer Effort Score (CES): It Isn't Always Accurate
In the world of CX, customer effort score (CES) measures how much effort customers expend to solve their problem. For this reason, companies traditionally assume a higher CES score indicates dissatisfaction, and while this sometimes rings true, it isn’t always the case. Langley Henderson noted that, throughout her career, she has seen customers end up more satisfied when they put in extra effort, as long as their needs are met.
A company offering one-on-one service shows that it’s not only listening, but treating the customer as an individual and trying to assist with empathy and care. Just as a contact center worker’s rigid adherence to a script can, forcing a one-size-fits-all solution can make the customer feel more like a hurdle than a human being.
So while the customer’s effort score may go up, their patience goes down because they don’t feel heard. Conversely, working together with a service representative to troubleshoot and ultimately resolve the problem will also produce a higher CES score, even though the customer walks away satisfied. Either way, the raw number alone can lead to skewed and inaccurate conclusions.
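To make that skew concrete, here is a small sketch under the common assumption that CES is the average of a 1-7 effort rating, scored as in this article so that a higher number means more effort. The two example groups are invented, but they show how identical effort scores can hide opposite outcomes unless paired with a satisfaction signal.

```python
# Illustrative sketch: CES as the mean of 1-7 effort ratings
# (higher = more effort, matching the framing used in this article).
def customer_effort_score(ratings: list[float]) -> float:
    """Average reported effort across survey responses."""
    return sum(ratings) / len(ratings)

# Two invented groups with identical CES but opposite outcomes: the effort
# number alone can't tell a frustrated customer from a satisfied one who
# worked through the problem one-on-one with a rep.
ignored  = [6, 6, 5, 6]   # high effort, issue never resolved
hands_on = [6, 5, 6, 6]   # high effort, resolved via collaborative troubleshooting
print(customer_effort_score(ignored), customer_effort_score(hands_on))  # 5.75 5.75
```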
“For my team, relying solely on CES scores for evaluating customer experience can be misleading,” Langley Henderson said. “And yes, it’s always a hurdle trying to pull and aggregate even more data to get a full picture of what’s actually going on with the customer — but it’s well worth the extra effort.”
Related Article: 4 Ways Brands Go Wrong With Digital Marketing Metrics
Using These CX Metrics Successfully
Customer experience metrics that look helpful on paper are not always helpful in practice. NPS, CSAT and CES are three examples of CX metrics that, as commonly implemented, don't help companies wholly understand how satisfied their customers are after engaging with their brand. Langley Henderson and Sayers agree that metrics yielding siloed data must be taken with a grain of salt, or integrated with other metrics that do what NPS and CES cannot. And in the case of contact center CSAT, a more personalized, human experience is necessary to earn higher satisfaction scores in post-phone call surveys.
Going forward, these and other metrics must be either adjusted or combined to ensure a more rounded, complete understanding of a company's CX data. Otherwise, we cannot fully understand the consumer’s point of view.