
The Seven Destructive Sins of UX Research

Here are seven examples of poor UX research practice that you can come across in your work with clients — along with some ideas on how to fix them.



Saeid Azizi

2 years ago | 10 min read

Most companies claim to design products and services that are simple to use. But when you ask customers to actually use these products and services, they often find them far from simple.

Why is there a disconnect between what organizations think of as “simple” and what users experience?

It’s fashionable to blame poor usability on firms not doing enough user research. On the face of it, this seems like an obvious cause.

If only firms did the research, they would realize their product was failing. But, like most simple explanations, it’s wrong.

In reality, there’s never been a better time to be a purveyor of UX research tools. Every organization seems to want to take the temperature of their customers.

So here are seven sins of UX research practice that you can come across in your work with clients, along with some ideas on how to fix each one:

Credulity

Dogmatism

Bias

Obscurantism

Laziness

Vagueness

Hubris

Credulity

The dictionary defines credulity as a state of willingness to believe something without proper proof. The form this takes in UX research is asking users what they want (and believing the answer).

People have little insight into their own mental processes, so there is no point asking them what they want.

This quotation from Rob Fitzpatrick captures it perfectly: “Trying to learn from customer conversations is like unearthing a delicate archaeological site. The truth is down there somewhere, but it’s fragile. While each blow with your shovel gets you closer to the truth, you’re likely to smash it into a million little pieces if you use too blunt an instrument.”

How can we overcome this problem?

A successful UX research study is one that gives us actionable and testable insights into users’ needs.

It’s no good asking people what they like or dislike, asking them to predict what they would do in the future, or asking them to tell us what other people might do.

The best way of gaining actionable and testable insights is not to ask, but to observe. You aim to observe for long enough that you can make a decent guess about what’s going on.

There are two ways to observe. We can watch how people solve the problem now. Or we can teleport people to a possible future and get them using our solution (a prototype) to see where the issues will arise.

The critical point is: What people say is not as helpful as what people do, because people are unreliable witnesses.

Dogmatism

Dogmatism is the tendency to lay down principles as undeniably true, without regard for evidence or the opinions of others. The form this takes in UX research is believing there is one “right” way to do research.

I’m sure you’ve worked with people who think that a survey is “the right way” to understand user needs. Perhaps because we hear about surveys every day in the news, people tend to think of them as being more reliable or useful.

But sadly, having a large number of respondents in a survey will never help you if you don’t know the right questions to ask. That’s where field visits and user interviews come in.

Field visits and user interviews give you signposts, not definitive answers. It’s broad-brush stuff, a bit like the weather forecast. There may be some patterns in the data, but these aren’t as useful as the conversations you have with users and the things you observe them do. It’s those conversations that help you identify the gap between what people say and what they do — and that is often a design opportunity.

But there comes a point when you need to validate your findings from field visits and user interviews through triangulation: combining methodologies in the study of the same phenomenon.

Quantitative data tell us what people are doing. Qualitative data tell us why people are doing it.

The best kind of research combines the two kinds of data.

For example, you might choose a survey to validate personas you’ve developed through field visits. Or you might choose multivariate A/B testing to fine-tune a landing page that you’ve developed through usability testing.

Bias

Bias is an influence that distorts one’s thinking, especially in a way considered to be unfair.

UX research is a continual fight against bias. There are a handful of different kinds of bias that matter in UX research, but it’s response bias I want to discuss here: bias caused by the way you collect data.

Sometimes the bias is obvious. For example, if you ask poor questions, you’re likely to get participants to tell you what you want to hear. You can correct this bias by teaching people to ask the right questions.

But there’s an even more pernicious type of response bias that’s much harder to correct.

This happens when the development team carries out the research and finds that people don’t really have a need for the product or service. It’s tempting to hide this from senior managers because no one wants to be the purveyor of bad news.

But if there’s no need for your product, there’s no point trying to convince senior managers that there is — you’ll be found out in the end. It’s a bad idea to cherry-pick the results to support what a senior manager wants to hear.

You shouldn’t approach interviews with a vested interest: The UX researcher’s job isn’t to convince people to use a service, or to get the results management wants; it’s to dig for the truth.

This doesn’t mean you shouldn’t have a point of view. You should. Your point of view should be to help the development team understand the data, not just tell the development team what they want to hear.

Obscurantism

Obscurantism is the practice of deliberately preventing the full details of something from becoming known. The form this sin takes in UX research is keeping the findings in the head of one person.

UX research is often assigned to a single person on a team. That person becomes the spokesperson for user needs, the team’s “expert” on users. This approach is a poor way to do research, and not just because the UX researcher doesn’t know all the answers.

The reason it fails is that it encourages the development team to delegate all responsibility for understanding users to one person.

One way you can prevent this sin on your own project is to encourage everyone on the team to get their “exposure hours.”

Research shows that the most effective development teams spend at least two hours every six weeks observing users (for example, in field visits or usability tests).

What you’re aiming for here is building a user-centered culture. You do that by encouraging the whole development team to engage with users. But you also need to design iteratively. And that takes us to our next sin.

Laziness

Laziness is the state of being unwilling to exert oneself. The form this takes in UX research is recycling old research data as if it were boilerplate that can be cut and pasted into a new project.

Our favorite example of this comes from the world of personas.

It is true that clients often approach the process of developing personas as a one-time activity. They will hire an outside firm to do field research with the needed number of users. That firm will analyze the data and create a set of beautifully presented personas.

Now we already know this is a bad idea because of the sin of obscurantism. We want the development team to do the research, not an external firm.

But let’s ignore that issue for a moment. The reason I’m using personas as an example here is that a client often asks me if they can re-use their personas.

They are now working on a new project that bears a passing similarity to one for which they developed personas last year. Since their customers are basically the same, isn’t it OK to recycle the existing personas?

This idea so misses the point of what UX research is about that it serves as a good example.

Here’s a secret many people don’t know: you don’t need to create personas to be user-centered. User-centered design is not about personas. In fact, personas really don’t matter.

Creating personas should never be your goal; understanding users’ needs, goals, and motivations should be. In some ways, a set of beautifully formatted personas is just proof that you met with users, in the same way that a selfie with a celebrity proves you were at the same restaurant.

The world you want to move to is one where the development team knows its users so well that personas aren’t needed. You don’t get to this world by recycling old research. You do it by making UX research part of the culture.

Senior UX researchers have known for a long time that you achieve user-centered design through iteration: you build something, you measure its usability, you learn from it, and you redesign.

Re-using old data, whether it’s in the form of personas, usability tests, or field visits, is not iterating — and it’s certainly not learning.

Vagueness

Vagueness means being imprecise: not clearly or explicitly stated or expressed. In UX research, we see it when a team fails to focus on a single key research question and instead tries to answer several questions at once.

This sin is partly caused by the sin of laziness. If you do research only occasionally, you need to answer lots of questions. This means you end up learning a little about a lot. In fact, you can learn an important lesson about UX research from a dishwasher. If you cram a lot in, nothing gets very clean.

With UX research, you actually want to learn a lot about a little. That “little” question is the specific question that’s keeping you up at night.

To uncover this question, we ask the development team to imagine the most useful, actionable research results possible. What would they tell us? How would we use them?

Everyone on the team should agree on the questions you plan to answer and the assumptions you plan to test. These top questions should be the drivers of every research activity.

This means you need to get specific with your research questions: you should be able to articulate your research questions on a couple of small sticky notes.

In fact, that leads us to an interesting exercise you can do to discover your research question.

Sit the development team in a room. Give each person a set of sticky notes. Tell them to imagine that we have an all-knowing, insightful user outside the room who will answer honestly any question we throw at them.

What questions would they ask?

We get the team to write one question per sticky note. After five minutes, we work as a team to affinity sort the sticky notes. Then we dot-vote on the group of questions that are most urgent to answer.

This idea works well because we not only identify the high-level theme but we also have a list of the specific questions to which we need to get answers.

Hubris

Last but not least, we have Hubris. Hubris means extreme self-love or self-confidence.

In UX research, it takes the form of taking excessive pride in your reports. All UX researchers suffer from this to some extent, but those with PhDs are the worst. And we say that as proud recipients of a PhD.

UX researchers love data. And when you love something, you want to share it with people. So you create detailed reports packed with graphs and quotations and screenshots and callouts. Look at my data! Look at how beautiful it is!

Sadly, few other people are as fascinated by data as we are. Our challenge is to turn that data into information, and that information into insight.

There are two problems with excessive detail.

First, people don’t read the report. They turn the page, see more data, appreciate how clever you are, get bored, and move on.

Second, overly detailed reports delay the design process. You don’t need a comprehensive spreadsheet analysis to find the top problems. That analysis is useful later, when you want to dig into the details, but the critical findings need to be fed back quickly.

This is so the design can be modified, and so the build-measure-learn cycle can continue.

Instead, you need to create information radiators (like usability dashboards and one-page test plans) that help teams understand the data so they can act on it. Information radiators are essentially billboards that promote your results, gradually permeating the team’s awareness.

As a general rule, if people need to turn the page, your report is too long. So ask yourself: how can we capture the results in a single glance?

This could be a concise visual way of presenting research data, like a user journey map, a persona, or a usability testing results dashboard.

Think of a recent project you worked on where UX research failed to deliver the expected business benefits. Were any of the “seven sins” a likely cause? If you could return to the beginning of that project, what would you do differently?

Great design is a symptom. It’s a symptom of a culture that values user-centered design. Bad design is a symptom too. It’s a symptom of an organization that can’t distinguish good UX research from bad UX research.

And perhaps that’s the most destructive sin of them all.



Created by

Saeid Azizi

Product Designer | Entrepreneur
