Common UX metric mistakes you might be making
What to avoid when building a great UX metrics program
Raise your hand if you reflexively dismiss pop-ups when you’re working on a task. I know I do — I generally dismiss things without thinking, laser-focused on completing the task at hand.
Now raise your hand if you’ve sent out a UX metrics survey using pop-ups. Again, I’ve done this. Hoping to catch a user in the right moment, I’ve set up in-context surveys to collect their thoughts, experiences, and feedback. While this isn’t inherently wrong, things have become saturated as tons of companies continuously solicit feedback.
As UX practitioners, we need to practice what we preach. While our goal may be to collect UX metrics in order to improve users’ experience, without careful thought, our metrics can quickly devolve into a spammy annoyance.
As I’ve helped build out metrics programs at a variety of startups, I’ve noticed some common pitfalls to avoid.
Mistake #1: Continuously interrupting our users
While collecting timely and in-context feedback is the gold standard, we need to balance our goals with our users’ goals. People visit your site or log into your tool to complete a task or fulfill a need. I don’t open GrubHub to complete an NPS survey; I open it to quell my hunger. I don’t log into American Airlines to write them a review; I log in to check my flight information.
While UX metrics are valuable, we don’t want to regularly interrupt users’ important tasks with a Customer Effort Score pop-up or Satisfaction survey. Regularly interrupting our users will spur them to reflexively dismiss our prompts or worse — we’ll undercut the seamless experience we’ve tried so hard to create.
What to do instead
Keep your users’ goals and mindset front and center. Be empathic and user-focused so you don’t undercut their experience. Are they in a rush? Do they need to complete a critical task? If so, you should probably avoid a pop-up, or you should at least carefully consider the timing and trigger.
Consider which format (or combination) might be the best fit. Based on their mindset and goals, identify the least intrusive medium. Perhaps an email or notification would allow them sufficient time, space, and thought to respond. Maybe a banner would be less disruptive. Or maybe a pop-up after they complete an important task, instead of before or in the middle of it would be more appropriate.
Mistake #2: Oversampling
In order to build out a comprehensive UX metrics program, it’s important to continuously collect feedback over time. However, you have to be considerate of your users and their time. Asking someone to provide an NPS rating every month is a sure way to create a detractor.
Additionally, it’s best practice to collect a variety of UX metrics and triangulate them to gain a more holistic and nuanced understanding of your product. However, you need to be mindful not to barrage the same users with several surveys in a short amount of time.
What to do instead
Use random sampling if you have sufficient numbers of users. Instead of sending NPS out to all of your users each quarter, send it to a subset. Then come next quarter, you can send it to a different subset.
This way you have a random sample each quarter, but you’ll be more likely to get responses since you’re asking people less frequently. Depending on your customer base and the number of metrics you’re trying to measure, you can determine the size of your subsets. But avoid surveying people more than once per quarter.
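The rotation described above — shuffling your user base once, then surveying a different slice each quarter so nobody is asked twice in a cycle — can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation; the function name and cohort count are hypothetical, and a real program would persist assignments and handle users who join mid-cycle.

```python
import random

def assign_survey_cohorts(user_ids, n_cohorts=4, seed=2024):
    """Deal users into one cohort per quarter so that no user is
    surveyed more than once per n_cohorts-quarter cycle.
    (Hypothetical helper for illustration only.)"""
    rng = random.Random(seed)  # fixed seed so assignments are reproducible
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    # Round-robin deal: cohort i gets every n_cohorts-th user
    return [shuffled[i::n_cohorts] for i in range(n_cohorts)]

# Example: 1,000 users split across four quarterly NPS waves
cohorts = assign_survey_cohorts(range(1000))
```

Because the cohorts are disjoint, each quarterly wave is still a random sample of the full base, but any individual user hears from you at most once a year for this metric.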
Coordinate between internal cross-functional groups. While UX may own Customer Effort Score or System Usability Score, Customer Success may own Net Promoter Score. Sufficient internal communication and coordination is necessary to avoid barraging users. Ideally there is a gatekeeper or coordinating committee to streamline efforts and reduce oversampling.
Mistake #3: Failing to understand the reasoning behind ratings
Many executives place more weight on quantitative data, perhaps soothed by its clear-cut nature. However, decontextualized numbers are ripe for misinterpretation. While scales feel objective, in reality people can interpret them differently. Americans are known for being quite enthusiastic and generous in their scores, while Europeans tend to be more restrained.
Additionally, the everyday person might interpret metrics questions differently than we intend. At one company, countless people responded to our NPS survey in utter bewilderment, genuinely wondering who talks about enterprise software with their family and friends.
What to do instead
Follow up your rating question with “Why did you give us this score?” While a decontextualized score doesn’t tell us much on its own, it becomes much more informative when paired with respondents’ explicit reasoning.
Perhaps someone had an excellent experience but gave a low NPS score because they didn’t know anyone else looking for a similar tool. Or perhaps someone struggled to complete a certain task in the product but likes the platform overall, so gave a high CSAT score. Understanding the why behind the rating is key.
Triangulate metrics data with other user data. It’s best practice to collect multiple UX metrics and triangulate them with user interviews, sales feedback, help tickets, product analytics, and other valuable data. This will provide a much more robust and accurate picture of your product and the overall user experience.
Mistake #4: Collecting overly broad and inactionable feedback
While it makes sense to collect an overarching UX metric, such as a blanket NPS for the company, this data should be supplemented with additional information. Collecting UX metrics isn’t enough; you need to learn from them and use them to improve your product.
For example, if I collect my entire product’s NPS, I don’t necessarily know which levers I should focus on to improve my score. Taken alone, it’s an abstract measure without actionable insight. This is where “Why did you give us this score?” can provide additional insight.
What to do instead
- Spend time up front crafting a thoughtful and strategic approach. Which metrics will be most relevant, valuable, and actionable for you? What should your key triggers be — what are the most important touchpoints and interactions? How frequently will you collect each measure? Who do you want to respond?
- Develop an analysis framework and action plan. Instead of simply analyzing feedback in aggregate, consider how you might get more granular. Perhaps it would be useful to break out feedback by roles, industries, experience level, locations, etc. This could provide more nuanced and actionable insights. Additionally, determine how you will review and act on the data as a company. Will an individual or committee review it? How often? Who will be accountable for integrating the feedback into the roadmap and product?
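Breaking feedback out by segment, as suggested above, can be as simple as grouping scores by a respondent attribute and comparing the averages. The sketch below uses made-up responses and a hypothetical role field purely to illustrate the idea; real data would come from your survey tool's export.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey export: (respondent role, score, free-text "why")
responses = [
    ("admin",     9, "Setup was painless"),
    ("developer", 4, "API docs are hard to navigate"),
    ("developer", 5, "Too many clicks to deploy"),
    ("admin",     8, "Reporting could be faster"),
]

# Group scores by role instead of analyzing only the aggregate
scores_by_role = defaultdict(list)
for role, score, comment in responses:
    scores_by_role[role].append(score)

averages = {role: mean(scores) for role, scores in scores_by_role.items()}
```

In this toy data, admins average 8.5 while developers average 4.5 — the aggregate alone would hide that the actionable lever is the developer workflow, not the product overall. The same grouping works for industry, tenure, or location.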
Despite the best of intentions, slapping together various in-product survey pop-ups generally isn’t the best approach. While they aren’t inherently bad, indiscriminately surveying users can lead to a terrible UX.
Instead, take a step back and develop a thoughtful UX metrics framework and action plan. Which metrics and data will you collect? Where, when, and how will you collect data? How will you review and act on the data? Only after establishing these strategic foundations can you begin to build a valuable and actionable metrics program.
As a User Researcher and Strategist, I help companies solve the right problems and build more relevant, efficient, and intuitive products. I started my UX career at a Fortune 500 company, and I've since helped establish the research practice at three B2B startups. I'm currently a Senior User Researcher at Unqork, the leading enterprise no-code platform.