
How to avoid 5 of the biggest mistakes in usability testing

Christian Jensen


This article won’t teach you all the theory you need to do usability testing right. It isn’t a how-to guide on preparing, conducting, and analyzing your usability studies. There are plenty of great courses and literature to teach you this.

Start with Nielsen Norman Group’s Usability Testing 101 and their other resources on the topic, and check out the courses offered by the Interaction Design Foundation. I’ve written this article as a companion to these.

Usability testing has been an essential part of my job throughout my career in Service Design and Product Design. I’m not a trained Researcher or the most experienced person in the field, but I’ve had the opportunity to use a range of usability testing methods (including in-person, remote, moderated, and unmoderated testing).

Despite having studied the theory, I’ve learned the hard way which critical mistakes most often screw up my own and other Designers’ usability testing.

I hope to highlight the strengths and weaknesses of the different testing methods and give you some concrete tips to help you avoid the wasted time, biases, and false signals that can lead you in the wrong direction.

1. You recruit the wrong test participants

Why it’s a problem

Finding and recruiting participants who represent your target group can be difficult, especially if you don’t have easy access to your users or if you’re targeting a very niche audience. This can lead Designers and Researchers to take the easy way out and compromise on their screening. Except for your pilot test, testing with colleagues, family, or friends is not okay.

Failing to recruit the right test participants will at best be a waste of time or give you suboptimal data. At worst, it will lead you down a completely wrong path.


Be smart about recruiting participants

Recruit your participants through an online tester panel and do remote unmoderated testing. If the panel can meet your screener criteria, and the participants don’t have to be your actual users, it’s one of the fastest and cheapest ways to recruit. My personal favorite at the moment is TryMyUI.com, or UsabilityHub.com for more basic tests.

Guerrilla-style testing at your local coffee shop is another great method to bypass the hassle of outreach and scheduling. Just remember that you still need to carefully scout for people in your target group. Depending on your screener criteria, the local coffee shop may not be an option.

Setting up a test panel of your own users is an extremely valuable long-term solution, although it won’t solve the recruitment challenge today. I highly recommend getting started, regardless of your target group, product, or finances. I apologize for my bastardization of a beautiful old Chinese proverb, but I think it sums up my point very well:

“The best time to start creating your test panel was 20 years ago. The second best time is now.”

2. You don’t iterate on your test script

Why it’s a problem

Especially when you’re new to usability testing, you won’t nail the test scenario and tasks in your first take. Ensuring that your participants test what you want is hard enough. Doing so without “leading the witness” or giving away the desired result is even harder.

You will learn that your test scenario wasn’t as clear as you thought. Perhaps one of the tasks was misunderstood, or maybe you accidentally used a term from the UI, making the desired click too obvious.


The problem is compounded by the fact that usability tests are often scheduled back-to-back throughout a whole day, with the expectation that all tests will be directly comparable and yield quantifiable results. This setup typically leaves no room for iterating on your script between tests, which I think is the key to getting it right.

Usability tests can be iterative too

Think of your usability test in the same way as the product you’re testing. We’re used to thinking in versions, perhaps starting with a beta or an MVP, then rolling out a v1 for testing and improving with a small subset of users before a full rollout. Iterate in the beginning of your study before doing the tests that will be measured, compared, and analyzed.

First of all, make sure to run a pilot test with a colleague, friend, or family member.

Think of the pilot test as your beta. It will help you iron out a lot of the kinks before involving your real test participants and curious stakeholders.

Run a few low-cost tests and improve your test script between them. Even if you want to conduct your tests in-person with real users, consider running a quick test with a guest at your local coffee shop, or order a couple of unmoderated tests from an online test panel.

The latter will be a great stress test of your script, as you won’t be able to get a test participant back on track if something is misunderstood or they get lost in the product.

3. You expect representative data from an unnatural situation

Why it’s a problem

Let’s just acknowledge it: A usability test is an unnatural situation. The level of awkwardness depends on the method and your specific setup. With moderated in-person testing, participants are not only performing a set of tasks, they’re doing it while you’re discreetly watching over their shoulder.

Furthermore, you’d often conduct this type of usability test in your office, completely removing your test participants from their natural environment.

One of the main challenges of usability testing is that the unnatural setting can lead to unnatural behavior and thus inaccurate or biased data.

It’s important not to bias your users

Your role as the facilitator is extremely important, both because of what you do and what you don’t do. You need to welcome the test user and set the scene, help them feel comfortable, and give them the instructions they need to perform the test, all without leading or helping them, and without biasing your data.


You will learn a lot by watching an experienced moderator in action before moderating any tests yourself. If you have one in-house or the budget to hire one, it’s worth considering.

Try remote usability testing. Although the video call, screen sharing, tasks, and moderation can’t be ignored, at least the test participant is free to stay in their natural environment.

Do your testing unmoderated or with retrospective think-aloud to remove the impact of the moderator altogether. It’s a great tool to have in your UX toolkit, but a good moderator plays a valuable role in a usability test, so unmoderated testing isn’t without tradeoffs. Don’t give up on moderated testing just because it’s difficult to get right!

Go to your users instead of having your users come to you. Conduct your tests in the environment your users would normally interact with your product, website, or service. Whether it’s in their office, their home, on the subway, or at a coffee shop, the data you get from testing in these environments is a lot more realistic than if you did it in one of your meeting rooms.

If going to your users isn’t an option, try emulating their natural environment in your office. Rather than a white, silent room, keep things a bit more casual. Don’t go full Marie Kondo on your meeting room before inviting in your participants, and leave the door open during the tests to allow for some background noise.

4. You listen more than you observe

Why it’s a problem

Think-aloud is a standard component of usability testing and there’s plenty of value in hearing why your participant clicked a certain button, what they expected to happen, etc. It’s just not enough.

Users describe past behavior and experiences with a product based on flawed memories. They make feature requests without a good understanding of the underlying need or the technical opportunities. They will unknowingly lie to themselves about how easy or difficult something was.

They will become efficient with a poorly designed interface and perceive it as being better than it is. This is why you need to pay attention to what your test participants do, and not (only) what they say.

A few tips for better observation

Observing your test participants is essential no matter which method you choose. However, some interesting signals can get lost in remote testing. Even with video calls and screen sharing, hand gestures, shifts in body posture, or involuntary foot tapping may not come through on camera.

Depending on your internet connection and video quality, even facial expressions can be hard to pick up. Because of this, you might want to do more in-person testing to get into the habit and learn the skill of observing your test participants and their behavior.

It’s valuable to have a dedicated note-taker in addition to the moderator. The note-taker can focus on observing the test participant, what they do on the screen, where they seem to struggle (even if they don’t point it out), and how they react to different situations.

It also lets you, the moderator, give your full attention to the participant. As Flora MacLeod writes in 5 things you should know before running a usability test:

“When you’re talking to someone and they’re looking at their phone, or their smartwatch, you can just tell they’re not paying attention. You want to give up talking because you feel like they don’t think you’re worth listening to. That’s what it’s like when the facilitator takes notes.”

A video recording of the whole session will allow you to analyze and spot even more signals afterward. It will also enable you to get input from your colleagues who may pick up on some things you didn’t see.

Take it one step further and live stream your test sessions to your colleagues in another room. You may have tried this in a Design Sprint. Not only will you get multiple perspectives on each test session — you will also significantly speed up and improve the analysis process, and hopefully avoid the next challenge…

5. You test to “confirm” your ideas

Why it’s a problem

Many Designers are used to dealing with opinions on their work. This is particularly true for the aesthetics of what we do, on which everyone seems to have an opinion. And they’re not always aligned with our own…

Our own opinions are strengthened by the time and brainpower we invest in research, inspiration-seeking, ideation, sketching, and prototyping. We’re likely to develop a preference for one of the potential solutions.

A certain fondness for a specific style of button. A preference for one user flow over another. Perhaps we’ve already had discussions with our colleagues and even picked a favorite and taken a stance.

All this makes us more likely to have an opinion about the best way forward, about what should and shouldn’t work, before we even present the alternatives to our users. This is the root cause of confirmation bias.

We will seek out the data that supports our own opinion while discarding what goes against it, often without even knowing it. This can be a huge problem as it’s ultimately up to our users to show us the best way forward.

Get ahead of confirmation bias

Ideally, you wouldn’t have any strong preferences or an attachment to a specific idea. This is easier said than done though. You may manage to stay neutral in some projects, but you will occasionally have a hidden agenda before a usability test.

The tendency toward confirmation bias is such a fundamental human instinct that it can’t just be turned off. For an introduction to human biases, I recommend ‘You Are Not So Smart’ by David McRaney.

Something I’ve found useful is bringing this hidden agenda into the open before going into a usability study. In addition to defining what to test, write down your assumptions and what you “hope” will happen. Be honest. And make your teammates and other relevant stakeholders do the same.

It’s better to state and discuss these perspectives beforehand than to secretly try to find support for your ideas later. Check yourself and each other in the analysis process to make sure you’re not just arguing your own case.

Remember that one test isn’t enough for you to make a conclusion, especially if the potential conclusion supports your own opinion.

Additionally, get multiple perspectives on your usability tests. Don’t rely on your own judgment of a usability test when you know you’re likely to be biased. Let others watch the sessions. Do your qualitative analysis together. Let everyone judge for themselves before you try to steer the conversation in your desired direction. Are you all seeing and hearing the same things?

If possible, have someone else run the tests of your design. While this certainly alleviates the problem of confirmation bias, it’s not always an option for small Design teams or solo Designers.

Key takeaways

Usability testing is an amazing tool for all Product Designers and should be embraced as early and often as possible in a project. However, poorly conducted usability tests can sometimes be worse than no usability tests at all.

Iterate to improve your test script, and prioritize recruiting the right test participants. Do your best to make your participants feel comfortable in an unnatural situation, and make sure to observe what they do and not just listen to what they say.

Finally, approach your usability tests with an open mind to avoid letting your confirmation bias screw up your data and lead you down the wrong path.

As highlighted in this article, each testing method has its strengths and weaknesses. Learn and practice the different methods and use a combination of them for the best results.

Happy testing! 🚀

Originally published at techmoneyculture.com on June 14, 2020.
