How to set up a research study

Learn how to evaluate your designs with research.


Meghan Wenzel


You’ve created some designs and want to collect feedback on them. You want to run a research study but aren’t quite sure where to start. What’s involved? What kind of testing should you do? How should you structure the interviews? First, determine what you want to learn. From there, you can plan your study, select your methods, and set up your discussion guide.

Step 1: Define What You Want to Learn

To set up a successful research study, meet with your stakeholders to determine the project’s goals and scope. Crystallizing and aligning on your research goals provides the framework for the entire study. Once you clarify what you want to learn, you can craft your research sessions accordingly.

Step 2: Plan Your Study

For a typical study, the main steps include:

  1. Meeting with the Product Manager and Designer (and any additional relevant stakeholders) to determine the business goals, research goals, project scope, and design requirements
  2. Reviewing relevant data analytics and previous research
  3. Determining the relevant research method(s) and target participants
  4. Developing the research script and sharing it with stakeholders for feedback
  5. Screening and recruiting participants
  6. Scheduling research sessions and inviting relevant stakeholders to attend
  7. Sending reminder emails before each session
  8. Running the sessions
  9. Synthesizing and analyzing results
  10. Sharing your findings and discussing action items and next steps

Step 3: Determine Your Methods

Since you have designs or prototypes ready to test, you’ll focus on evaluative research, such as concept testing or usability testing.

Concept testing is better for earlier-stage designs. If you have wireframes or low-fidelity prototypes, concept testing allows you to collect high-level, open-ended feedback to see whether you’re on the right track. Usability testing is better for later-stage designs. If you have more refined high-fidelity prototypes, usability testing allows you to collect focused feedback to rigorously evaluate the effectiveness and intuitiveness of your designs.

Concept Testing


  • Earlier stage
  • Low- to mid-fidelity prototype
  • High-level, open-ended discussion
  • Directional feedback on your designs

Sample question: “How would you use the performance data on this page to make decisions? Are there any additional metrics you’d like to see?”


  • Do users’ mental models align with your assumptions?
  • Are your designs moving in the right direction?
  • How can you improve the designs?

Usability Testing


  • Later stage
  • High-fidelity prototype
  • More focused, narrow, task-based discussion
  • More rigorous evaluation of your designs

Sample question: “Let’s say you want to compare your reports’ performance in Q1. Who received the highest overall ratings?”


  • Are they able to complete the task?
  • Can they articulate what’s going on or what they’re seeing? (Don’t ask “Does this make sense?” — make them demonstrate their understanding)
  • Is anything confusing?

If you pursue usability testing, you’ll then need to consider if you want to do moderated or unmoderated sessions.

Moderated Sessions

The moderator works directly with the participant, guiding them through the study, answering questions, and troubleshooting any challenges that arise.

  • Pros: you can probe participants’ actions and responses, and provide clarification and troubleshooting if needed
  • Cons: more time- and effort-intensive to administer
  • When to use: when you need detailed feedback on less refined or complicated designs

Unmoderated Sessions

The participant completes tasks and answers questions at their own pace, on their own time, without interacting with a moderator.

  • Pros: less time- and effort-intensive to administer
  • Cons: no opportunity for detail, guidance, clarification, or probing
  • When to use: when you need high level feedback to validate designs quickly with a diverse or large group of participants

Step 4: Craft the Discussion Guide

Once you’ve selected your research method and type of moderation, you can begin writing the discussion guide. You’ll need to think about the flow of the conversation overall as well as the specific topics and tasks you want to cover.

Designing the Interview Flow

For the overall discussion flow, you’ll want to move from general, high-level questions to more focused, specific ones.

Begin with high level questions about the problem space:

  • “When thinking about managing and supporting your team, what are the top challenges you’re focused on?”

Then move to more focused questions:

  • “When was the first time you experienced real value in [product]?”
  • “Walk me through how you’re using [product] now. Feel free to share your screen if you’re comfortable.”
  • “If you went to another company that didn’t use [product], would you be inclined to use [product] again or look to replace it with something else? Why?”

Selecting the Research Tasks

Review your goals and use them to determine which tasks you want users to complete. Which areas of the design are you most focused on? Which sections do you have concerns or questions about? Be sure to include tasks that touch on each of these sections.

Start high level:

  • “Talk to me about this page. What would you expect to do here?”
  • Collect their initial thoughts and impressions

Then dive deeper:

  • “How would you expect to x, y, z?”
  • Make sure they do not feel judged or like they’re being tested
  • Respond neutrally and consistently after each task. Don’t praise participants for completing a task correctly; they’ll notice when the praise doesn’t come the next time

Balance giving the participant time to explore the design naturally with asking them to complete specific tasks. Open exploration lets you see how they organically interact with the designs, rather than simply stepping them through a list of discrete tasks.

Keep in mind that for usability testing, the more built out the prototype is, the more authentic your data will be. Allowing participants to interact with a more realistic prototype will allow you to collect richer feedback.

Finally, be aware that participants often get hung up on small details that are clearly wrong.

Countless times I’ve seen participants call out data visualizations, text, and graphs that clearly show unrealistic data. Including realistic copy and data will help you collect the most credible feedback.

Once you have designs you want feedback on, you’ll need to clarify what you want to learn. Based on your context and goals, you’ll select the appropriate research method and moderation style and create a discussion guide that covers the topics and tasks you’re most interested in.

Play around with different methods and script styles, and over time you’ll find what works best for you!


Created by

Meghan Wenzel

As a User Researcher and Strategist, I help companies solve the right problems and build more relevant, efficient, and intuitive products. I started my UX career at a Fortune 500 company, and I've since helped establish the research practice at three B2B startups. I'm currently a Senior User Researcher at Unqork, the leading enterprise no-code platform.






