The Challenges and Opportunities of Measuring the User Experience

Invited Essay

Not too long ago it seemed the biggest challenge facing companies was justifying any investment in usability and the user experience (Bias & Mayhew, 2005). While that challenge has not entirely gone away, I increasingly see organizations facing a new one. Instead of worrying about how to justify any investment in the user experience, they are asking how to better manage that investment.

Organizations big and small increasingly want to measure and manage their efforts with numbers. This applies to all aspects of a business environment, from office supplies to marketing and advertising, so it’s no surprise that numbers are assuming a growing role in assessing the user experience. This movement toward quantifying the user experience can allow for more precise decisions and provide evidence to support management efforts.

You can’t manage what you can’t measure. That’s hardly a revolutionary concept in the world of business management. But numbers alone aren’t a panacea. If you’re measuring the wrong thing or if the numbers are divorced from reality, you can end up making a precise but very bad decision.

Even if organizations embrace the idea of measuring the user experience, they face considerable challenges: where to start, what to measure, how to obtain enough qualified participants, what methods to use, and who owns the initiative. In this essay I’ll discuss some challenges and opportunities I’ve seen and some ideas for making the measurement process a better experience.

Not Knowing Where to Start

A common challenge for organizations that are ready to embrace a more structured approach to improving the user experience is knowing where to start. There’s a tendency to jump right into counting the number of usability problems identified, tracking the number of customers observed, conducting usability tests, and assessing the costs spent on upgrading lab equipment. While these activities may play a role in your measurement strategy, it’s best to start with a plan to make the most of your efforts. Starting from the top and working your way down will help to separate inputs from outputs.

Create a Strategy

A good plan will help answer many of the questions around the best methods and metrics to achieve your goals. In creating a plan, start with the big picture (a top-down approach) by identifying how your company measures success. Just like business management in general, you want to optimize your efforts around things that matter. By keeping an eye on these success measures, you will produce research outcomes that relate to dimensions the business finds meaningful.

You’ll then want to associate UX metrics and activities with these company success metrics. With this quantitative chain of accountability you’re able to better prioritize your efforts and even calculate a return on investment. In theory, this top-down approach makes sense; in practice, the devil is in the details. Here are some ways to get started with your plan.

1. Define the KPIs

Every company has Key Performance Indicators (KPIs)—metrics that can make or break products and promotions. While revenue is usually the ultimate metric that the company brass and shareholders track, it’s a lagging indicator (you can’t do anything about last quarter’s numbers!).

For that reason, you need to track other metrics that provide a more timely signal of revenue. Other than revenue, common metrics tied to organizational success include behavioral metrics, such as registrations and repeat purchases, as well as attitudinal ones, such as customer satisfaction, likelihood to repurchase, and likelihood to recommend.

One of the reasons many organizations use the Net Promoter Score (NPS) as a key success metric is that it’s been shown to loosely associate with revenue growth (Reichheld, 2003). While the NPS is far from “the only number you need to grow,” it has the benefit of broadly evaluating what’s meaningful to the customer and is a leading indicator of subsequent revenue.
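
The computation behind the NPS is worth keeping in view when you report it: respondents answer a single 0-to-10 likelihood-to-recommend question, and the score is the percentage of promoters (those rating 9 or 10) minus the percentage of detractors (those rating 0 through 6). Here’s a minimal sketch in Python; the responses are invented for illustration:

    # Standard NPS calculation (Reichheld, 2003): promoters rate
    # 9-10 and detractors 0-6 on the 0-10 likelihood-to-recommend
    # item; NPS = %promoters - %detractors.
    def net_promoter_score(ratings):
        """Return the NPS, in percentage points, for 0-10 ratings."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100 * (promoters - detractors) / len(ratings)

    # Hypothetical survey responses, for illustration only.
    print(net_promoter_score([10, 9, 8, 7, 6, 10, 2, 9, 7, 5]))  # 10.0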

2. Benchmark Perceptions of the User Experience

With the KPIs identified, you’ll next want to be sure you are able to access or collect information on these indicators, along with some measure of the users’ perception of the quality of their experience. The user experience is of course not just what users perceive; it’s also what they do. To create the chain of accountability from pixels to profits, you’ll need to understand the relationship between what a customer thinks and does.

It’s become a bit of a platitude in UX circles that what users say and what they do aren’t always the same (Nielsen, 2001). However, attitudes (what customers think) generally do affect actions (what customers do) and vice versa: actions affect attitudes—but not always in easily predictable ways.

But don’t look at this problem as a reason not to measure. Look at it as a manageable challenge. You need to systematically measure both attitudes and actions, but you can start with measuring attitudes because they’re generally easier to collect in a systematic way using standardized questionnaires and surveys.

For measuring the perception of UX quality for websites, you can use the SUPR-Q (Sauro, 2015). For software, you can use a combination of instruments such as the SUS (Brooke, 1996), the TAM (Davis, 1989), and the UMUX-Lite (Lewis, Utesch, & Maher, 2013), or, ideally, some psychometrically validated instrument that addresses constructs like usefulness, ease of use, and productivity.
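
As a concrete example of scoring one of these instruments, the SUS follows Brooke’s (1996) procedure: ten items rated 1 to 5, with odd-numbered items contributing (rating - 1) and even-numbered items contributing (5 - rating); the sum is multiplied by 2.5 to yield a 0-100 score. A minimal Python sketch, with one invented respondent:

    # Standard SUS scoring (Brooke, 1996): ten items rated 1-5;
    # odd items contribute (rating - 1), even items (5 - rating);
    # the sum is multiplied by 2.5 to give a 0-100 score.
    def sus_score(responses):
        """responses: list of ten 1-5 ratings, item 1 first."""
        if len(responses) != 10:
            raise ValueError("SUS requires exactly 10 item responses")
        total = sum((r - 1) if i % 2 == 0 else (5 - r)
                    for i, r in enumerate(responses))
        return total * 2.5

    # One hypothetical respondent, for illustration only.
    print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0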

Many organizations already have annual customer surveys where they collect KPIs like customer satisfaction and NPS. Leverage these efforts by inserting standardized measures of UX quality in the same survey. Again, measuring perceptions of the user experience isn’t the same thing as measuring user experience behavior, but it’s half of the equation—and a great place to start. Use this same study to collect names and contact information of customers who would be willing to participate in a follow-up study.

3. Associate UX Measures to KPIs

With a measure of UX quality and KPIs from the same customer, you should seek to understand the mathematical relationship between the two. For example, we’ve often seen strong correlations between NPS and SUS (Lewis, 2012), where attitudes toward usability explain between 30% and 50% of customers’ likelihood to recommend. In other words, users who rate a product’s usability highly are much more likely to recommend the product to others.

Creating the association involves statistical techniques like multiple-regression analysis to understand how much impact each aspect of the user experience has on KPIs. But even without these more sophisticated statistical techniques you can examine and code verbatim comments to open-ended questions and link these to UX measures. For example, we’ve been able to identify that user comments about poor navigation on some websites were strongly associated with lower Net Promoter Scores. This suggested that negative word of mouth was in part driven by poor navigation.
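
To make the regression idea concrete, here is a minimal sketch that regresses a 0-10 likelihood-to-recommend rating on two hypothetical UX aspect ratings (ease and navigation). Every number is fabricated for illustration, and a real analysis would need a far larger sample (Sauro & Lewis, 2016):

    # A sketch of regressing a KPI (0-10 likelihood to recommend)
    # on two hypothetical UX aspect ratings. All data are
    # fabricated; real analyses need far larger samples.
    import numpy as np

    ease       = np.array([2, 3, 3, 4, 4, 5, 5, 6, 6, 7])   # 1-7 ease rating
    navigation = np.array([3, 2, 4, 3, 5, 4, 6, 5, 7, 6])   # 1-7 navigation rating
    ltr        = np.array([3, 4, 5, 5, 6, 7, 8, 8, 9, 10])  # 0-10 recommend item

    # Design matrix with an intercept column; least-squares fit.
    X = np.column_stack([np.ones_like(ease), ease, navigation])
    coefs, *_ = np.linalg.lstsq(X, ltr, rcond=None)
    print("intercept, b_ease, b_navigation =", np.round(coefs, 2))

    # r^2: the share of KPI variance the UX ratings explain.
    pred = X @ coefs
    r2 = 1 - ((ltr - pred) ** 2).sum() / ((ltr - ltr.mean()) ** 2).sum()
    print(f"r^2 = {r2:.2f}")

The same design-matrix approach extends to however many UX aspects you measure; the fitted coefficients indicate how much each aspect is associated with movement in the KPI.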

If the KPIs you are measuring are actual revenue, purchases, cancelations, or some other behavior, link this behavioral data to the attitudinal data of the perceptions of the UX quality from the same customer. It’s important to have that linkage; otherwise you won’t be able to fully understand how changes in UX attitudes affect the company’s KPIs.

This is one of the benefits of a top-down measurement approach. You let the data help guide your decision on what’s important instead of relying on the loudest executive or angriest customer (although you can’t ignore them either).

Once you understand the linkage between users’ thoughts, behaviors, and their relationship to KPIs, you can use common methods to improve the user experience: defining tasks, users, and metrics and then measuring before and after making changes.

4. Identify and Track Top Tasks by Product Areas

In my experience most organizations either have a pretty good handle on who their users are or know how to collect the data through personas and customer segmentation efforts. There’s usually more of a gap in understanding what users are trying to do with a product or website.

While products and websites can have hundreds or thousands of functions, usually a small fraction of that functionality is used most often. It’s this minority of tasks that determines how customers use and recommend a product. Understanding the strengths and weaknesses of these top tasks can be an efficient way to improve the user experience. A good approach to identifying the most commonly used tasks for each product or app is to conduct a top-task analysis. The somewhat unorthodox technique is explained by Gerry McGovern in his book, The Stranger’s Long Neck (2010).

To conduct a top-task analysis, enumerate all the tasks in the users’ language, and identify the features, content, and functionality you want participants to consider. Avoid internal jargon as much as possible, and keep the tasks at a level of granularity that is both actionable and meaningful to users. The key constraint of a top-task analysis is that participants can pick only a limited number of tasks that they find most essential (usually five).

If you have trouble conducting a separate survey solely for this purpose, include a top-task question in customer surveys when you collect the KPIs and measures of UX quality. It usually adds only 3 to 4 minutes to the total survey time.
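
Tallying the votes is the easy part; what matters is the shape of the sorted counts, where a small “neck” of tasks typically collects most of the votes. A minimal sketch with hypothetical task names:

    # Tallying top-task votes: each participant picks up to five
    # tasks; sorting the counts reveals whether a small "neck" of
    # tasks dominates. Task names here are hypothetical.
    from collections import Counter

    votes = [
        ["check order status", "compare plans", "download invoice"],
        ["check order status", "contact support"],
        ["check order status", "compare plans", "reset password"],
        ["compare plans", "download invoice"],
    ]

    tally = Counter(task for picks in votes for task in picks)
    total = sum(tally.values())
    running = 0
    for task, count in tally.most_common():
        running += count
        print(f"{task:20s} {count:2d} votes ({100 * running / total:3.0f}% cumulative)")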

5. Benchmark the User Experience

Once you know what the top tasks are, you’ll want to implement a benchmarking program to get a regular pulse on the user experience.

Benchmarking an experience (and not just the perception of the experience) through usability tasks allows you to identify at a more granular level how the experience is performing over time, what the problems are, and where you likely need to improve things.

A lot goes into creating a successful benchmarking program. You want to do this right as it becomes a future comparison point to gauge the success of your planning efforts. This means you need to

  • Define the tasks and users: Be sure you have the right tasks and users identified. Use the results of the top-task analysis and the data you have on who your users are to be sure the sample of participants you recruit matches your user population.
  • Use multiple metrics: You need to collect a mix of perception and performance metrics (Sauro & Lewis, 2009). You’ll want to collect measures at the task level (completion rates, time, and perceived difficulty) and at the study level, like the SUPR-Q (Sauro, 2015), SUS (Brooke, 1996), or UMUX-Lite (Lewis et al., 2013). These study-level perception measures should be the same ones you collect in the survey of perceptions (Step 2), but because they are collected in a different context, the actual scores won’t necessarily match.
  • Have a sufficient sample size: You need to be able to differentiate between real findings and random noise (Sauro & Lewis, 2016); the confidence-interval sketch after this list shows one such calculation. Finding enough of the right participants can be a challenge, especially for organizations that have specialized users. One effective way to know you’re testing the right users is to use the names you collected in the perception benchmark (Step 2). You’ll also want to supplement this with other efforts, such as using outside panel agencies or your own recruiting.
  • Collect both metrics and problems: While it often makes sense to distinguish between formative (diagnosing) and summative (evaluating) evaluations, in practice a good benchmark test mixes the two (Scriven, 1967). While the emphasis is on collecting metrics at the task and test level to assess the experience, you can make the most of the budget and time by also recording what problems in the interface are leading to poor metrics (e.g., long completion times, low ratings, and low completion rates).
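
On the sample-size point, confidence intervals are a simple way to see how much random noise a given sample leaves in your metrics. A minimal sketch of the adjusted-Wald interval for a completion rate, which Sauro and Lewis (2016) recommend for the small samples typical of usability tests:

    # Adjusted-Wald confidence interval for a task completion rate
    # (Sauro & Lewis, 2016): add z^2/2 successes and z^2 trials
    # before computing the usual Wald interval.
    import math

    def adjusted_wald_ci(successes, n, z=1.96):
        """95% CI (z = 1.96) for a binomial completion rate."""
        n_adj = n + z ** 2
        p_adj = (successes + z ** 2 / 2) / n_adj
        margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
        return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

    # For example, 9 of 12 participants completing a task:
    low, high = adjusted_wald_ci(9, 12)
    print(f"75% observed; 95% CI: {100 * low:.0f}% to {100 * high:.0f}%")

With 9 of 12 participants completing the task, the observed 75% completion rate is consistent with a true rate anywhere from roughly 46% to 92%, exactly the kind of uncertainty a benchmark report should disclose.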

If you have multiple products and teams, you’ll want to benchmark each product and functional area. This gives you additional points of comparison and more granular data to examine, but also presents another challenge of resources and budget.

6. Plan Improvements

The point of measuring is to make the user experience better, not just document it. Now that you’ve gathered information about where the user experience issues are, you’ll need a plan to improve them. While the best plans to improve the UX will be based on the context and specific issues identified in benchmarking, there are some core principles you’ll want to stick to. For example, use multiple methods, iterate early and often, and measure each phase with a core set of metrics. This plan will likely be more tactical as you identify the root causes of the problems and then test iteratively to determine whether design changes are moving the needle.

7. Find the Right Mix of Methods

There’s an unfortunate perception that measuring crowds out the more qualitative methods many UX researchers rely on. It doesn’t. Developing a good plan to improve the user experience does mean measuring the outcome of your efforts quantitatively, but while it can take years of experience to learn which research method best addresses a given goal, the answer will almost always involve some combination of qualitative and quantitative approaches. You need that mix to identify user needs, evaluate the interface against those needs, and generate improvements.

Employing both qualitative and quantitative methods is neither new nor controversial. In fact, there’s a journal dedicated to mixed-method research, aptly named the Journal of Mixed Methods Research, which can provide a lot of guidance on how to integrate qualitative and quantitative methods. Broadly, the right method will depend on the goals and stage of the research, from more generative methods such as contextual inquiry (understanding customer problems and goals) and card sorting (how people perceive labels and phrases), to more evaluative methods such as tree testing (how people browse for products) and usability testing (what problems users encounter while attempting tasks). Either way, results from qualitative and quantitative research lend support and depth to each other.

8. Understand How Changes in Designs Improve the KPIs

Once changes have been implemented and there’s a measurably better user experience, you should see an impact on the more high-level perceptions of UX quality. You’ll want to compare KPIs over time and see what actions are having an impact and where you need to course correct. You can use some simple statistical comparisons to differentiate real movement from sampling error as well as more advanced techniques like regression analysis to help understand which causes are driving each effect.
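
As an example of such a comparison, a two-proportion test can indicate whether a change in completion rate between two benchmark waves is larger than sampling error alone would produce. A minimal sketch with hypothetical counts:

    # Two-proportion z-test: is the change in completion rate
    # between two benchmark waves bigger than sampling error?
    # The counts below are hypothetical.
    import math

    def two_proportion_z(x1, n1, x2, n2):
        """Return (z, two-sided p) for H0: the two rates are equal."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p

    # Wave 1: 60 of 100 completed; wave 2, after a redesign: 75 of 100.
    z, p = two_proportion_z(75, 100, 60, 100)
    print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.26, p = 0.024

Here the 15-point gain yields p ≈ .02, small enough at the conventional .05 level to treat the improvement as real movement rather than noise.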

9. Compute the ROI

Now that you’ve made changes and showed how they have improved the user experience and the company KPIs, use that linkage to compute the return on investment (ROI).

ROI calculations often get an “eye roll” from management because they tend to be stated in vague terms based on potential time saved and the benefits of discovering problems earlier rather than later, all drawn from studies conducted decades ago (Rosenberg, 2004). You can make a much more compelling case for the efficacy of UX budgets by showing how changes in your design moved the corporate metrics. The closer your metrics are to actual company revenue, the more compelling your ROI numbers are.
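
The arithmetic of ROI is the easy part; the hard-won ingredient is the linkage built in Steps 3 and 8. A minimal, entirely hypothetical sketch, assuming that linkage estimated a design change added $120,000 in annual revenue against $40,000 in research and redesign costs:

    # An entirely hypothetical ROI sketch. Both figures below are
    # assumptions, not data: the benefit would come from the
    # Step 3/Step 8 linkage between design changes and revenue.
    benefit = 120_000  # estimated annual revenue gain (hypothetical)
    cost = 40_000      # research plus redesign cost (hypothetical)
    roi = (benefit - cost) / cost
    print(f"ROI = {roi:.0%}")  # 200%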

10. Perform a Periodic UX Audit

Even healthy people and fancy sports cars need periodic checkups to make sure things are in order. Incorporate periodic checkpoints to be sure you have the right people and processes in place to ensure the right methods and metrics are being collected. Markets and customer needs change over time. You’ll want to be sure you’re properly aligned to the company and customer. In practice, this means instituting a regular evaluation schedule (such as annual benchmarking) where you compare metrics over time and identify problems in the experience that are dragging down your measures. KPIs also change as company strategies shift; be sure you’re periodically realigning your metrics and methods to these goals.

Isn’t That the Job of Market Research?

Even the best laid plans are bound to fail if there isn’t buy-in from the stakeholders in an organization. Most organizations have distinct groups for marketing and user experience research. And not only are they separate departments, they usually have separate methods and mindsets.

Often, planning measurement initiatives and working with KPIs puts UX research activities right in line with much of what marketing or market research does, which can lead to confusion about overlapping goals.

The two functions are often delineated something like the following:

  • Marketing does the quantitative; UX the qualitative.
  • Marketing measures branding and satisfaction; UX observes behavior.
  • Marketing does segmentation; UX manages personas.
  • Marketing concerns itself with what customers say; UX is concerned with what customers do.

This often results in turf wars over who owns which methods and who “owns” the customer data. Be prepared for this inevitable collision by leveraging the strengths of each group to help craft the measurement plan.

The successful researcher, regardless of title, should understand how to combine traditional market research and UX research activities for the best results. The goal of any company is to create a customer (Drucker, 1954). And by extension, the goal of customer (or user) research is to better understand who a customer is and deliver products and services that meet their needs—which should help keep that customer too!

This doesn’t mean that UX researchers should start planning trade-show events, marketing campaigns, or the next conjoint analysis. It also doesn’t mean the market research professional should start running usability tests and performing heuristic evaluations. But it does mean that if you’re in either role, you should understand the tools and techniques that help define what customers think and what they do—and that means blending methods and mindsets.

In principle this means you can

  • Mix quantitative and qualitative. They are complementary, not competing, methods.
  • Use surveys and observation to answer the same research questions.
  • Understand how the user experience affects brand attitudes and vice versa.
  • Measure what people think and what they do, often in the same study.

Conclusion

Measuring the user experience is full of challenges: what to measure, how to measure it, what the goals of measurement should be, and how to work with other teams that may have similar objectives. As an organization matures and moves beyond questioning the value of investing in the user experience, having a plan to make the most of that investment is the critical next step. The best plans will be organized around the goal of linking user research metrics to how a company defines success through Key Performance Indicators (KPIs). This requires using the right methods to understand users’ behavior and perceptions and to identify the tasks that are most important to them. Implementing a regular benchmarking effort to identify the root causes of the problems that have the biggest impact on the KPIs is an important step as user needs evolve.

References

Bias, R. G., & Mayhew, D. J. (2005). Cost-justifying usability: An update for the Internet Age (2nd ed.). Burlington, MA: Morgan Kaufmann.

Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In P. Jordan, B. Thomas, & B. Weerdmeester (Eds.), Usability evaluation in industry (pp. 189–194). London, UK: Taylor & Francis.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–339.

Drucker, P. (1954). The practice of management. New York, NY: Harper & Row.

Lewis, J. R. (2012). Predicting Net Promoter scores from System Usability Scale scores. Retrieved August 4, 2016 from www.measuringu.com/blog/nps-sus.php.

Lewis, J. R., Utesch, B. S., & Maher, D. E. (2013). UMUX-LITE: When there’s no time for the SUS. In Proceedings of the Conference on Human Factors in Computing Systems (CHI 2013; pp. 2099–2102). New York, NY: ACM.

McGovern, G. (2010). The stranger’s long neck. London, United Kingdom: A&C Black.

Nielsen, J. (2001). First rule of usability? Don’t listen to users. Retrieved August 14, 2016 from www.nngroup.com/articles/first-rule-of-usability-dont-listen-to-users/.

Reichheld, F. F. (2003). The one number you need to grow. Harvard Business Review, 81, 46–54.

Rosenberg, D. (2004, September). The myths of usability ROI. Interactions, 11(5), 22–29.

Sauro, J. (2015). SUPR-Q: A comprehensive measure of the quality of the website user experience. Journal of Usability Studies, 10(2), 68–86.

Sauro, J., & Lewis, J. R. (2009). Correlations among prototypical usability metrics: Evidence for the construct of usability. In Proceedings of CHI 2009 (pp. 1609–1618). New York, NY: ACM.

Sauro, J., & Lewis, J. R. (2016). Quantifying the user experience: Practical statistics for user research (2nd ed.). Burlington, MA: Morgan Kaufmann.

Scriven, M. (1967). The methodology of evaluation. In R. E. Stake (Ed.), Curriculum evaluation (American Educational Research Association Monograph Series on Evaluation, No. 1). Chicago, IL: Rand McNally.