Data and UX Research: Making Connections
Using numbers fearlessly in your Human Centred Design work
In 2008 I was a Master’s student in the department of Economics at McMaster University. Shortly after graduating I began working as a civil servant in capital planning and policy, and I held this position for nearly four years. For nearly ten years, I worked with numbers — preparing spreadsheets, performing econometric analysis, creating data sets. And yet, in all that time, I only had one person ask, specifically, “But why do we care?”
It was a professor who otherwise did not seem particularly fond of the human species as a whole. And yet, every class, once we had discussed the basics of the papers we had been assigned as readings, her first question was, “Why do we care?” In other words, what is the real meaning, the significance, of the questions being asked, the data that was used, and the numbers that were generated in the author’s analysis? What, more precisely, is the human impact?
I completed that course prepared to apply this perspective to my policy work, only to find that my higher-ups in the civil service didn’t actually have a lot of interest in discussing the real human impact of anything that I worked on.
It’s easy, I think, to get tunnel vision when you’re working with numbers and figures all day, and to lose sight of the real effects on real people of the solutions you put forth. I was never able to shake the feeling that we were missing this part of the equation.
Fast forward to 2020 and I was once again in school, this time for an accelerated UX Design program at BrainStation. As we worked our way through our unit on research, I realized that this was the perspective that had been missing all along.
I had lived in a world where the term “data” only ever implied quantitative figures; but the qualitative data that is gathered using UX research methods brings the focus back to the heart of the matter: people.
What is data?
Data is any information that we can observe and measure. If you can see it or count it somehow, it can be captured as data points, and those points can be organized into data sets. Generally speaking, you can sort data into quantitative (data with a unique numerical value associated with each point, which can be analyzed using mathematical methods) and qualitative (data that can be observed by various methods but that does not have a numerical value associated with it).
Data analysts generally only work with quantitative data, using a variety of mathematical and statistical tools to find patterns and connections within the data. I would loosely categorize the types of figures that a data analyst can produce into three categories:
1) individual data points for a specific variable (ex. for a clothing retailer, it could be “Gross total sales, December 2020”)
2) data for a specific variable across time or in various locations (ex. “Gross sales per month in 2020” or “Gross sales per location in December 2020”)
3) the effects that changes in one variable can be shown to have on changes in another variable (ex. “effects of marketing campaign on gross monthly sales”)
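As a rough sketch of these three categories, here is what each might look like in code. All of the figures and field names below are invented for illustration; they are not real retail data.

```python
from collections import defaultdict

# Hypothetical monthly gross sales for a clothing retailer (invented figures)
sales = [
    {"month": "2020-10", "location": "Toronto",  "gross": 120_000},
    {"month": "2020-10", "location": "Hamilton", "gross": 45_000},
    {"month": "2020-11", "location": "Toronto",  "gross": 135_000},
    {"month": "2020-11", "location": "Hamilton", "gross": 50_000},
    {"month": "2020-12", "location": "Toronto",  "gross": 210_000},
    {"month": "2020-12", "location": "Hamilton", "gross": 80_000},
]

# 1) A single data point for one variable: gross total sales, December 2020
dec_total = sum(r["gross"] for r in sales if r["month"] == "2020-12")

# 2) One variable across time: gross sales per month
per_month = defaultdict(int)
for r in sales:
    per_month[r["month"]] += r["gross"]

# 3) Two variables side by side: marketing spend paired with monthly sales,
#    the raw material for asking whether one moves with the other
marketing_spend = {"2020-10": 5_000, "2020-11": 8_000, "2020-12": 20_000}
paired = [(marketing_spend[m], per_month[m]) for m in sorted(per_month)]
```

The third list is only the starting point: pairing the variables sets up a correlation analysis, but it says nothing on its own about cause and effect.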
For the third type of measure, a data analyst will at best be able to establish correlation between the variables (that is, that they seem to be changing in some sort of relational way) but never causation (that is, that a change in one variable certainly caused the other to change in a specific way).
Unless there is information about the motivation for each individual data point (such as a survey taken by every customer of the clothing retailer asking whether and how much the marketing campaign affected their decision to purchase an item), causation can’t be validated.
A data analyst can also determine whether the correlation is statistically significant — that is, whether the relationship between the two variables is strong enough, given the size of the data set, that it is unlikely to have appeared by chance. This helps to focus on the most promising levers for effecting desired change (such as increased sales).
I hope this description wasn’t too heavy! If there’s one takeaway that I hope I managed to convey, it’s that quantitative data will tell you what is happening, but it won’t tell you why. This is where qualitative data can fill in some major gaps.
Looking at Personas
It was the classes we spent discussing user personas that got me thinking about the interplay between quantitative and qualitative data. Looking at a well-crafted persona, you can start to see the importance of both kinds of data in understanding your core users.
Let’s use this example of “Casual Cathy”:
What parts of this persona could we measure with numbers and figures? To start, her demographic details could likely be mapped to census data — if you looked up figures for women in her age group in Toronto, for instance, you could probably find support for the likelihood that they lived with their partner, worked in a creative field, and voted NDP.
Similarly, you could probably find survey data that would support the behaviours identified here, and you could confirm that people with all of these characteristics tend to get a lot of their current events knowledge from social media, and more specifically Facebook and Instagram. Again, we can use quantitative data and analysis here to identify what is happening.
But as we know, in human-centred design, the why is often the key to creating a truly outstanding experience for your core users. Your user’s goals, motivations, and pain points are difficult to describe as numerical values — even a survey with self-reported ratings of, say, the difficulty level of a task will produce highly subjective results. We use UX research tools such as interviews, empathy mapping, journey mapping, and more to gather qualitative data that will give us the insight into causation that quantitative data can’t provide.
Working With a Data Analyst
In the world of quantitative data analysis, the most reliable data sets are generally ones that have at least hundreds of observations (data points). Data sets of ten or fewer observations are typically not useful. However, when we perform UX research, you can start to see patterns of motivations and pain points emerging after as few as ten interviews or user tests.
It seems to me that while people might behave differently (often based on the constraints they face), people’s goals and frustrations tend to be much more universal. So it’s important to understand that, if you’re working with a data analyst directly, you will often find yourselves looking at the same problems from different angles.
Here are my suggestions for how best to work with a data analyst when performing UX research:
- Really understand what you’re trying to explain
Having a clear, concise, and common understanding of the problem you’re trying to address will help you to select the information that will be useful to you in your analysis.
- Determine ways to measure the behaviour you’re trying to explain
It could be that there’s an existing data set that will neatly address the questions that you’re asking. More likely, however, you’ll need to understand how to proceed with incomplete or imperfect data. For example, even if the exact measure that you’re looking for doesn’t exist, there could be a proxy measure that you could use instead that will still add to the picture you’re painting.
- Ask: “If x changes, do we see any change in the measures that we’re interested in?”
Here is where you might be able to find some correlation to investigate further.
- Use whatever you learn from these inquiries to inform the questions you ask of real people during your interviews and user testing
This approach will likely lead you to ask better questions, which will hopefully lead to more insightful research findings.
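The third step above ("if x changes, do we see any change?") can be sketched as a simple before-and-after comparison. Everything here is invented for illustration (the redesign, the sign-up counts), and a real analysis would also check whether the difference is statistically significant rather than noise.

```python
# Invented daily sign-up counts before and after a hypothetical
# onboarding redesign (x = the redesign, the measure = sign-ups)
before = [14, 12, 15, 11, 13, 12, 14]
after  = [18, 17, 20, 16, 19, 18, 17]

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)
lift = (mean_after - mean_before) / mean_before

# A visible lift flags a correlation worth investigating further --
# the interviews and user tests are what would explain the "why".
```

A lift like this would then shape the interview questions: rather than asking users generally about onboarding, you could probe what specifically changed in their experience after the redesign.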
One Final Thought: What’s Missing?
Whether you’re working directly with a data analyst or reading about the findings of a study, a discussion of the limitations of the data is always appropriate. Quantitative data sets are rarely ever a perfect fit to answer our questions, and the reasons for this are many — privacy concerns, interruptions in survey taking, selection bias, or sometimes what we need is just not actively being measured.
Whatever the case, don’t be afraid to ask what’s missing from the quantitative data that you’re using. In my experience, this can sometimes be just as interesting as the numbers you do have to work with!