
10 Tips for Conducting a Usability Test


Jenna Kreiss

2 years ago | 8 min read

Your new website design was well-received by colleagues in your latest design critique. But how does your design perform with your target audience? Running a usability test is a great way to validate your design with real users. Testing with real users allows you to gather data needed to identify usability issues, improve your design and ensure it’s easy to use.

As a user experience researcher, I rely on the following 10 tips when conducting a usability test.


1. Define the Goals of Your Test

Before you start developing your usability test, you need to collaborate with project stakeholders to define the overarching goals of the research project. Answering the following questions helps you define and solidify the main goals of your research:

  • What is being tested?
  • What is the business case for this research?
  • What is the objective of this usability test?
  • What is the hypothesis?

2. Recruit the Right Participants

One of the most important parts of conducting a usability test is recruiting test participants who are representative of your target audience. This ensures the feedback you gather reflects an actual user’s experience as closely as possible. To determine your test participants, consider any personas you may have. What age range, location, income, employment status, or language aligns with your personas? You should also think about web expertise, and the types of devices, browsers, and operating systems test participants will use. In some cases, you may need to get more granular with your test requirements and have participants adhere to specific use cases. You can accomplish this by using screeners, which are pre-test questions that determine whether a participant is a good fit for your usability test.

Here’s an example:

1. How do you commute to your job?

a. Public transportation [Reject]

b. Personal vehicle [Accept]

c. Bike [Reject]

d. No commute, I work from home [Reject]

You can set up acceptance criteria for each screener question. Depending on the answer the participant gives, they are either accepted or rejected for participation in the usability test. Using screener questions is a great way to ensure you’re getting the right representative users for your research.
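If you build your own screener flow, the accept/reject logic is simple to express in code. Here’s a minimal Python sketch based on the commute question above; the question text, answer options, and function name are hypothetical and not tied to any particular testing platform:

```python
# Hypothetical screener: accept only participants who commute by personal vehicle.
SCREENER = {
    "How do you commute to your job?": {
        "Public transportation": "reject",
        "Personal vehicle": "accept",
        "Bike": "reject",
        "No commute, I work from home": "reject",
    }
}

def screen_participant(answers: dict) -> bool:
    """Return True only if every answer maps to an 'accept' criterion."""
    return all(
        SCREENER[question].get(answer) == "accept"
        for question, answer in answers.items()
    )

# A participant who drives to work passes; a cyclist is screened out.
print(screen_participant({"How do you commute to your job?": "Personal vehicle"}))  # True
print(screen_participant({"How do you commute to your job?": "Bike"}))              # False
```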

3. Craft a Solid Scenario

The scenario sets the stage and communicates the context of what the participant will be doing during the usability test and why they will be doing it. The participant is adopting the role of the person in the scenario, so create a story narrative that will help the participant connect to your scenario.

Things to keep in mind while crafting a scenario:

  • Keep it short — Provide just enough information to set them up for the tasks. If the scenario is too long and wordy, it can cause user fatigue and will be hard to remember.
  • Use “user” language — Providing users with enough detail is important, but the scenario should be written in a way the user can relate to. Use language, tone, and style the participant can recognize and understand. Avoid product language, as well as technical and industry jargon.
  • Keep it simple — Ensure there is no ambiguity with understanding the scenario. Avoid using abbreviations, be explicit, and don’t assume that the participant knows what you’re talking about.
  • Address tasks and concerns — Every scenario should address one or more tasks and each of those tasks should be intended to address one or more concerns you have with the product.

Here’s an example of a solid scenario:

“You are traveling to Seattle for a business trip next week. You need to figure out the amount of money you can be reimbursed for meals and other travel expenses.”

4. Make the Tasks Realistic

Usability test tasks need to accurately and adequately reflect the goals of the research project. Tasks also provide instructions about what participants need to do. Creating good tasks is essential to producing accurate, actionable findings.

A few things to keep in mind while creating tasks:

  • Make them actionable — You must ensure the participant is able to complete the task at hand. Double-check that the workflow makes sense and that your test prototype is functioning correctly.
  • Use “user” language — Again, use language, tone, and style the participant can relate to and understand. Avoid product language, as well as technical and industry jargon.
  • Identify success criteria — Defining success criteria for each task will help you measure the percentage of tasks participants complete correctly.
  • Don’t lead them on — Avoid directly instructing the participant on how to complete the task at hand. You will want to observe how the participant would work through the task on their own.

Examples of realistic tasks:

  • “Verbalize your understanding of the information on this page.”
  • “Demonstrate how you would add the Bronze Independent Health Plan to your cart.”
  • “Compare the benefit details of Dental Plan A and Dental Plan B, then purchase the plan that best meets your needs.”

5. Identify Success Criteria

It’s important to define what success looks like for each test task to determine if the participants have successfully completed each task.

For example, a task might ask the participant to “Identify the employer contribution towards the dental plan.” In the following image, the employer contribution towards the dental plan is $20.00/MO.

[Image: Dental Plan details user interface]

For this task, you might define the success criteria as “The participant identifies $20.00/MO as the employer contribution towards the dental plan.” If the participant identifies that information correctly, you can count that as a success.

If the participant doesn’t identify that information or doesn’t attempt to complete the task, then you would count that as a failed attempt of the task.

Identifying success criteria allows you to measure the percentage of tasks the user completes correctly using the success rate metric. This metric provides a general picture of how well your site supports users and how much improvement is needed. You can compare these metrics against those from a previous version of the site to measure progress towards better, more usable designs.

6. Include Some Quantitative Metrics

Gathering qualitative insights from participants is great, but did you know you can also measure a participant’s performance? You can measure usability by adding some quantitative metrics to your test, including:

  • Success rate
  • Time on task

The success rate metric measures the percentage of tasks that participants complete correctly during the usability test. This is a coarse metric: it does not tell you why participants fail tasks or how well they performed on the tasks they did complete. The user success rate is calculated using the following equation, where x is the number of tasks completed successfully and y is the total number of tasks undertaken:

(x ÷ y) × 100 = success rate

Example: You asked five users to complete six tasks each, so you observed 30 task attempts. Of those attempts, 19 succeeded and 11 failed (based on the success criteria you set). The success rate would therefore be 63%:

(19 ÷ 30) × 100 ≈ 63%
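As a quick sanity check, here is the same calculation as a small Python sketch, using the numbers from the example above:

```python
def success_rate(successful: int, attempted: int) -> float:
    """Percent of observed task attempts that met their success criteria."""
    return successful / attempted * 100

# Five users x six tasks = 30 attempts; 19 met the success criteria.
print(f"{success_rate(19, 30):.0f}%")  # 63%
```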

The time on task metric measures the total duration a participant spends on a single task. Record how long it takes the participant to complete the task, converted to seconds (e.g. 02:15 = 135 seconds).

To summarize the results, calculate the arithmetic mean (the average time on task) by dividing the total time spent across all task attempts by the number of attempts completed.
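For instance, here is a small Python sketch (the recordings are made-up timings, not real data) that converts mm:ss recordings to seconds and computes the mean:

```python
def to_seconds(mm_ss: str) -> int:
    """Convert an 'MM:SS' recording (e.g. '02:15') to seconds (135)."""
    minutes, seconds = mm_ss.split(":")
    return int(minutes) * 60 + int(seconds)

# Hypothetical time-on-task recordings for one task across five participants.
recordings = ["02:15", "01:48", "03:02", "02:30", "01:55"]
times = [to_seconds(r) for r in recordings]

mean_time = sum(times) / len(times)
print(f"Mean time on task: {mean_time:.0f} seconds")  # 138 seconds
```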

You can use this metric to compare task times from an original design versus a redesign to measure usability. You can generally assume that shorter task times correlate with a better user experience. It’s important to identify tasks that take participants a long time to complete, and then examine why those tasks took so long.

Additional metrics used to measure effort, satisfaction and ease include:

  • Customer Effort Score (CES) — Measures how much effort a customer had to put into a specific interaction.
  • Customer Satisfaction (CSAT) — Measures short-term happiness, or how a customer feels about a specific service or product.
  • Single Ease Question (SEQ) — A seven-point rating scale to assess how difficult users find a task. Usually administered immediately after a user attempts a task in a usability test.
  • System Usability Scale (SUS) — A reliable tool for measuring usability. SUS consists of a 10-item questionnaire with a five-point rating scale (Strongly Agree to Strongly Disagree); see the scoring sketch below.
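To illustrate how one of these instruments is scored, here is a sketch of the standard SUS scoring formula in Python: odd-numbered items (positively worded) contribute their response minus 1, even-numbered items (negatively worded) contribute 5 minus their response, and the sum is multiplied by 2.5 to land on a 0–100 scale. The responses below are made up for illustration:

```python
def sus_score(responses: list) -> float:
    """Score a 10-item SUS questionnaire (each response 1-5) on the 0-100 scale."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, ... are the odd-numbered items
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Made-up responses from a single participant.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```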

7. Pilot Your Test

You’ve written your test scenario and tasks and are eager to get this study in front of participants. Before testing with your target audience, it’s essential to pilot your test. Running through a mock session in advance helps you:

  • Identify issues within the test workflow.
  • Correct website or design issues.
  • Refine the scenario and test questions.
  • Become more comfortable facilitating the test.
  • Ensure your tasks capture the study objective.

You can run a pilot test with yourself, a colleague, friends, or even family. Even the most veteran usability test facilitators consider pilot tests a best practice. Piloting your test ensures your study will run smoothly.

8. Gain Consent

As part of good research ethics (and possibly legal obligations), you must get informed consent from the participant before conducting the usability test. When using an unmoderated testing platform like usertesting.com, gathering the participant’s consent is handled automatically. If you’re conducting a moderated test, it’s your responsibility to inform the participant how their identity and feedback will be collected and used.

Be sure to explain:

  • How their feedback will be used for the study.
  • That the study is voluntary, and they can end the session at any time.
  • That the session will be recorded, how it will be recorded (screen-recorder software, web meeting recording, video camcorder, etc.), and how it will be stored (if applicable).

It’s good practice to assign each participant a unique ID number. This allows you to identify the participant in the analysis and report without using the participant’s real name, which could expose their personally identifiable information (PII).
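One lightweight way to generate those IDs is a script like the following Python sketch (the names are placeholders; in practice, keep the name-to-ID mapping in a restricted, access-controlled location):

```python
import uuid

# Map each participant's real name to an anonymous ID so notes and reports never expose PII.
participants = ["Participant Name 1", "Participant Name 2"]
id_map = {name: f"P-{uuid.uuid4().hex[:8]}" for name in participants}

for name, pid in id_map.items():
    print(pid)  # reference only the ID in analysis and reporting
```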

9. Record Your Test

When conducting a usability test, it’s always a good idea to record your session. Recording the test allows you to go back and take notes, create short video clips, or share the video to help communicate study results.

Remember, you must gain consent from each participant to record the session. If the participant agrees to be recorded, it’s important to remove any identifying details before you share the video internally or online. You might need to blur the screen to hide their face or personal information they input into the website, or even distort the participant’s voice to protect their PII.

If you have multiple test recordings, standardize a location to store them. This could be a platform like Vimeo, an external hard drive, or a password-protected folder on the network.

10. Avoid Research Bias

While facilitating a usability test, it’s important to avoid bias. Research bias occurs when we consciously or unconsciously influence the results of the study to get a certain outcome.

Research bias can cloud the results of your study. The purpose of the usability test is to observe real-life interactions between the participant and the website. This won’t be a pure observation if the participant has preconceived notions of how they should act or perform during the usability test.

A few things to keep in mind to avoid research bias:

  • Keep your body language and speech as natural as possible.
  • Remain neutral and don’t project your own bias onto the participant.
  • Provide guidance without leading the participant.
  • Do not introduce new information or give unintended clues that might invalidate the session.
  • Analyze all the data, even if it doesn’t seem useful, to avoid only interpreting the feedback that supports your hypothesis.

So, there you have it — my top tips for a successful usability test! I hope this information helps you with your next research study. Happy testing 😊
