
Usability Testing: 7 Metrics to Assess Ease of Use

7 minute read
Usability testing shows how well a solution meets people's needs and highlights areas for improvement. Here are seven areas to watch.

An essential part of a user-centered design process is assessing a product or service's ease of use. It helps an organization understand how well a solution fits user needs and highlights areas for improvement. Empirical studies like usability testing yield insights directly derived from user behavior and feedback.

During a usability test, users engage with a solution and solve test tasks while a facilitator observes them and gathers their feedback. While oftentimes considered a purely qualitative exercise, it is possible to record, analyze and synthesize quantitative measures as well. Here are some of the most widely used usability metrics.

Usability Metric #1: Task Success

Task success measures effectiveness: to what degree can users successfully complete a given task? When users struggle to understand how a solution works, what actions to take and how to advance from start to finish, it's a clear sign that the usability of the tested solution is not optimal.

Task success can be measured in more than one way. Binary task success only measures if the task was completed or not. We can show the results per user in a chart, or display the completion percentage of all tested users for each task, as seen below.

task success binary

The above visualization shows that the only tasks every user successfully completed were tasks one and five. Only half of the users could finish task four. These results lead to the conclusion that the solution doesn't support its users in completing their objectives in an optimal way. Task four should be given particular attention: analyze what users were doing, what they said and what blocked their path to success.
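As a quick illustration, the completion percentage per task can be computed directly from raw pass/fail observations. The data layout and all values below are hypothetical; this is a minimal Python sketch, not part of any particular testing tool:

```python
# Binary task success: one True/False entry per tested user.
# Task names and outcomes are made-up example data.
results = {
    "task 1": [True, True, True, True],
    "task 4": [True, False, True, False],
}

def completion_rate(outcomes):
    """Share of users who completed the task, as a percentage."""
    return 100 * sum(outcomes) / len(outcomes)

for task, outcomes in results.items():
    print(f"{task}: {completion_rate(outcomes):.0f}% completed")
```

The same dictionary could just as easily be fed into a charting library to produce the per-task bar chart shown above.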

For more nuanced results and insights, a test can define levels of task success. For example, use a four-level scale to rate task completion such as: no problems faced; minor problems faced; major problems faced; failed. Note: reach an agreement on what user behavior constitutes each of the levels, especially if several people analyze the test results. The outcome of a level-based task success metric will look like this:

task success levels
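Tallying level-based results follows the same pattern. Assuming observers record one of the four agreed-upon levels per user and task (the labels and ratings below are made-up examples), the distribution can be computed like this:

```python
from collections import Counter

# The four agreed-upon completion levels, ordered best to worst.
LEVELS = ["no problems", "minor problems", "major problems", "failed"]

# Hypothetical observer ratings for one task, one entry per user.
ratings = ["no problems", "minor problems", "no problems", "failed"]

counts = Counter(ratings)
for level in LEVELS:
    share = 100 * counts[level] / len(ratings)
    print(f"{level}: {share:.0f}%")
```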

Related Article: User Testing Belongs in the UX Process: Here's Why

Usability Metric #2: Number of Errors

This effectiveness metric reports the volume of observable incorrect user actions, i.e., any action that moves the user off track from completing their task. Defining what counts as an error can be challenging. Typically, we talk about mistakes like choosing the wrong menu item or accidentally clicking a link located near the right one. All humans make errors, but if we identify spikes in certain tasks, we can investigate how to improve the design to eliminate these error-prone conditions. To report the results, we can show the number of errors or the error rates per user, per task, or as averages across users and tasks.
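For instance, the average error count per task can be derived from a simple tally. The layout (one error count per user, per task) and all values below are hypothetical:

```python
# Observed error counts: one entry per user, keyed by task.
# All numbers are illustrative, not real test data.
errors = {
    "task 2": [0, 1, 3, 1],
    "task 3": [2, 2, 4, 2],
}

def mean_errors(counts):
    """Average number of errors per user for one task."""
    return sum(counts) / len(counts)

for task, counts in errors.items():
    print(f"{task}: {mean_errors(counts):.2f} errors per user on average")
```

A noticeably higher average for one task is exactly the kind of spike worth investigating.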

number of errors

Usability Metric #3: Number of Help Requests 

Number of help requests is very similar to the number of errors metric. In this case, the metric represents users' uncertainty when carrying out tasks. Whenever a user asks a question like: “I guess I should click here now, right?” or “What should I do now?” they are stuck on their journey. The conclusion is the solution is not self-descriptive enough in that moment to indicate the next action and therefore needs to be re-worked. We can chart this metric the same way we did for number of errors.

number of times help requested

Related Article: Why User Testing Isn't a Nice-to-Have, It's a Must-Have

Usability Metric #4: Number of User Actions

This metric expresses efficiency. What effort do users have to put forth to accomplish their goals, i.e. complete their test tasks? As a rule of thumb, the lower the effort, the higher the usability, and the better the user experience. We can gauge the effort by counting the number of observable actions that users take during their task completion journey. We know how many steps are needed from start to finish. If we now see that users require many more steps than what is necessary, or we see that some users utilize significantly more steps than others, it means that our solution does not support these users as well as it should. We can report the results per task or across tasks, per user or across users.
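One way to quantify this, sketched below with made-up numbers, is to compare the steps each user actually took against the known minimum for the task:

```python
# Minimum steps needed per task, known from the optimal path,
# versus steps each user actually took. All values hypothetical.
optimal_steps = {"task 1": 4, "task 2": 6}
observed_steps = {
    "task 1": [4, 5, 9],
    "task 2": [6, 6, 14],
}

def excess_ratio(taken, needed):
    """How many times more steps a user took than strictly necessary."""
    return taken / needed

for task, needed in optimal_steps.items():
    ratios = [excess_ratio(taken, needed) for taken in observed_steps[task]]
    print(f"{task}: worst user needed {max(ratios):.1f}x the minimum steps")
```

A ratio close to 1.0 means the user stayed on the optimal path; large ratios flag users the solution failed to guide.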

number of actions

Usability Metric #5: Time on Task

Task execution time is another indicator of the effort it takes to carry out a task. By noting when a task is started and when it's completed, we can compare the time spent between tasks and between users. If some users carry out their tasks more slowly than others, we can explore what aspects of the solution didn't support them properly during their journey. Remember not to interpret the time as representative of how long a user in the real world would take, because in the test situation we ask users to tell us what they are doing and thinking, which takes time. A visualization of the results may look like this:


time on task
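The summary statistics behind such a chart can be computed in a few lines. The median is often preferable to the mean here because a single very slow user would otherwise skew the result; the measurements below are hypothetical seconds:

```python
from statistics import median

# Seconds from task start to completion, one value per user (illustrative).
times = {
    "task 1": [42, 55, 48, 120],
    "task 2": [90, 95, 88, 84],
}

for task, secs in times.items():
    print(f"{task}: median {median(secs)} s, slowest user {max(secs)} s")
```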

Usability Metric #6: Learnability

Learnability is a special flavor of "Time on Task." When we ask the same test users to carry out the same tasks repeatedly, we can trace how much faster they can complete them in each trial. The time between trials can vary — from minutes to days or even weeks — depending on our focus. We can visualize the results in charts like this one:

learnability chart

Here we see the task execution times generally going down over three trials. This is to be expected. Yet the decrease — the learnability — was higher for some tasks than others. Why is that? What aspects of the solution make it hard for users to understand and remember how to proceed? As mentioned above, because test users are asked to narrate their actions and thoughts during the test, which increases their task execution times, the relative difference between the trials is what's important, not the actual time in seconds.

Following the same approach, we can express learnability by any of the other metrics above, as long as we gather their values over several trials. If the learning effect is high, task success should increase, errors should decrease, help requests should decrease, and number of user actions should decrease.
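Because absolute times are inflated by the think-aloud narration, a sketch like the following reports the relative improvement between the first and last trial instead of raw seconds. The trial times are hypothetical:

```python
# Task execution times in seconds for the same user over three trials.
# Example values only -- the relative drop is what matters, not the seconds.
trial_times = [180, 120, 90]

def relative_improvement(times):
    """Percentage drop in execution time from the first to the last trial."""
    return 100 * (times[0] - times[-1]) / times[0]

print(f"{relative_improvement(trial_times):.0f}% faster by the last trial")
```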

Usability Metric #7: Satisfaction

rate your satisfaction

So far, all metrics have been based on observations. In addition, we can ask users to self-report their satisfaction with the way the solution allowed them to complete a task. After each task we can use a Customer Satisfaction (CSAT) survey which is typically based on a single item: “Rate your satisfaction with <product/service name>” alongside a Likert scale for answering.
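Averaging the ratings per task is straightforward. Assuming a 1-5 Likert scale and hypothetical responses:

```python
# Post-task CSAT ratings on a 1-5 Likert scale, one entry per user.
# Task names and scores are illustrative.
csat = {
    "task 1": [5, 4, 5, 4],
    "task 2": [2, 3, 2, 3],
}

def mean_rating(ratings):
    """Average satisfaction rating for one task."""
    return sum(ratings) / len(ratings)

for task, ratings in csat.items():
    print(f"{task}: CSAT {mean_rating(ratings):.1f} / 5")
```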

We can then show the results per task, across tasks, per user or across users in a chart like this:

satisfaction ratings - an important point in usability testing

If satisfaction with the solution is low for one task, then any part of the solution used during the completion of that task is ripe for improvement.

Taking Action on Usability Testing Results

Now that we have a set of metrics, what do we do with them? First, we can use them to succinctly communicate different aspects of usability and UX to stakeholders. Keep in mind, however, that user experience is more than the sum of these measures. Second, we can pinpoint areas of the solution that need improvement. Third, we can track the performance of our solution over time: Are we improving between releases, or do the numbers stagnate or even get worse? Finally, we can compare the results against targets we set, and compare our solution against competitor solutions.

As Lord Kelvin said, “To measure is to know.”  

About the author

Tobias Komischke

Tobias Komischke, PhD, is a UX Fellow at Infragistics, where he serves as head of the company’s Innovation Lab. He leads data analytics, artificial intelligence and machine learning initiatives for its emerging software applications, including Indigo.Design and Slingshot.

