
Once a product, service or experience has been released to the public, it’s time to gather feedback from the potential and current customers using it. The best way to do this is to plan, execute and interpret proper customer experience (CX) and user experience (UX) research. While this sounds obvious, many companies fail to do it.

What typically happens is that CX/UX is cut from the project to save budget, and marketing collects and interprets the data. If we’re unlucky, CX/UX is cut from the project, marketing isn’t doing much, and we wait until customer service notices a lot of problems. This slow, reactive approach increases business risk and isn’t the right way to go.

Marketing often looks at activity and action metrics such as time on a page, number of leads and number of sign-ups: quantitative measures that can be compared against our goals. However, 15 new leads this month, 5 minutes spent on a page, and $25,000 in revenue this week don’t really tell us what’s going right or wrong with our latest product release.

That doesn’t mean marketing is doing something wrong or should be excluded. It means that marketing and CX/UX should partner at all stages: determining what will be measured, defining success criteria, and deciding what we need to learn after the release. This planning can include other departments as well, since we would want to know what each is trying to measure. What does engineering need to measure? Does sales need to measure something related to this release? Which of these things can be measured with passive data collection, and which will require CX/UX research such as customer interviews and observational studies?

And most importantly, how do we determine accountability for the entire team? If the project does well (based on metrics and success criteria), how do we make sure all of the disciplines get credit? If the project does poorly or fails in some way, how do we make sure the whole team is held accountable? Perhaps the failure ends up being UX’s fault, but if the whole team were held equally accountable for failure, then perhaps the whole team would fight for what UX needs in order to do a better job.

Flaws in Surveys

One of your project teammates is likely to suggest surveying the customers who are using the new feature to learn what they think of it. While non-CX/UX departments usually love surveys, CX/UX practitioners rarely use them, for the following reasons:

  • CX/UX doesn’t focus much on user demographics. We typically don’t design differently for men vs. women, married vs. unmarried, homeowner vs. non-homeowner, etc. We design for how our customer base tends to think and behave, and for what would be compelling and meaningful to them. These details are not learned through surveys.
  • Multiple choice questions box people into an answer even if none of the provided answers feel like a match. Sometimes the participant wants to choose multiple answers but can only choose one.
  • People taking surveys on their phones might not want to answer longer, open-ended questions.
  • People self-assess poorly. They are not good at predicting what they might do in the future, and they don’t always remember the past accurately. They might tell you they’d pay for a feature that did X, but down the road, when X is released, they might have changed their mind or decided it’s not worth the money.
  • Surveys can lead to meaningless vanity metrics that don’t solve problems. For example, the customer support team at (let’s say) an American health insurance company might want to survey people solely on their experiences with that team. Customers may be eager to tell the company how much they hate their health coverage, costs, policies, website, forms and everything but the customer support. If people like the customer support but hate everything else about dealing with your company, congratulations on your nice customer support; your company is still very broken. A vanity metric of “people are happy with our call center” will only go so far when customers’ dominant perceptions of your brand are negative.
  • Survey responses are often too incomplete or generalized to help us design something better.

Without deeper research, survey data often isn’t actionable. If your net promoter score (NPS) comes out around 4, that’s bad news, but what’s going wrong, and what can we do about it? We might not have any idea. Surveys don’t give us deep information on the real problems. They can signal a need for more research into what’s going wrong, but on their own they are too often flawed, incomplete or skewed to give CX/UX actionable information.
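For context on that number: NPS asks customers how likely they are to recommend you on a 0-10 scale, then subtracts the percentage of detractors (scores 0-6) from the percentage of promoters (scores 9-10), so scores range from -100 to +100. In a hypothetical survey where 30% of respondents are promoters and 26% are detractors, the NPS is 30 - 26 = 4: technically positive, but the score alone tells us nothing about why those detractors are unhappy or what to fix.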

Related Article: Not Another @#$&! Survey ...

Budget for Ongoing Research

Many teams love to cut their CX/UX teammate as soon as the last wireframe or prototype has been delivered, on the assumption that the work is done. When done correctly, our work is never done. If you believe in any variation of the cyclical learn-build-test model, then you are always working to learn more so that it can inform design, which is then tested. If you believe in any variation of agile, where the manifesto’s first principle is that our highest priority is customer satisfaction, then we must invest in, and keep improving, how we research that satisfaction.

If you want to minimize business risk and accurately determine potential and current users’ reactions to your new release, you will need to plan, budget and approve headcount for CX/UX to be actively researching and monitoring customers and their actions.

Related Article: Pinpointing the Magic CX Metric That Drives Growth