How to Become a Data-Driven Product Manager
In a world where statistics are kept on everything, finding data is easy. Understanding how to use it is the hard part.
Joe Van Os
“If you can’t measure it, you can’t improve it” — Peter Drucker
When it comes to building products, data-driven decision making has been positioned as an essential component of success.
Unfortunately, many teams don’t have the luxury of a Data Scientist or a UX Researcher on staff, which can result in their data-driven journey starting without a proper understanding of how to use data. This can be risky, as data can easily be misused, misleading, and manipulated.
To make sure data doesn’t steer your team in the wrong direction, it’s important to understand common data pitfalls, and the basics of how to effectively use data.
Fooled By Data
Every day we are exposed to an exceptional amount of information. To make sense of it, our brains are wired to search out patterns. This is called Apophenia or Patternicity, and it can become a major pain when analyzing data.
A classic example is the coin flip test. Answer this before reading on:
If I flip a coin 9 times, and it lands heads each time, what is the probability (in %) that the 10th flip will be tails?
The pattern of 9 heads in a row influences people to give a higher probability of the tenth flip being tails. This is a logical fallacy. Every flip has a 50% chance of landing tails. Previous coin flips don’t impact the current coin flip. So why did this trick us?
Psychologists attribute this trickery to the Law of Small Numbers. The human brain is wired to look for patterns, but is poor at understanding probabilities, including the impact of randomness within a small sample size.
We understand that a coin flip has a 50% chance of landing heads or tails, so a set of coin flips should average a 50/50 split. When we see a pattern of 9 heads in a row, we subconsciously weight the probability of the next coin flip being tails. The pattern influences our reasoning.
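A quick simulation makes the independence of coin flips concrete. This sketch (not from the original article) looks only at sequences that start with 9 heads, then checks how often the 10th flip lands tails. The trial count and random seed are arbitrary choices:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Among sequences that start with 9 heads in a row,
# how often is the 10th flip tails?
trials = 500_000
streaks = 0      # sequences whose first 9 flips were all heads
tails_after = 0  # ...and whose 10th flip was tails

for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(10)]  # True = heads
    if all(flips[:9]):
        streaks += 1
        if not flips[9]:
            tails_after += 1

print(f"Streaks of 9 heads: {streaks}")
print(f"P(10th flip is tails | 9 heads in a row): {tails_after / streaks:.3f}")
```

The conditional probability comes out near 0.5, no matter how striking the streak that preceded it looks.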
Luckily there are ways to limit the risk of being fooled by randomness.
Statistically Significant Sample Sizes
If we flipped a coin thousands of times, the result would be roughly a 50/50 split between heads and tails. This is the Law of Large Numbers — as the number of trials grows, the average result converges to the expected probability of any individual trial (in this case, a single coin flip).
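The contrast between small and large samples is easy to demonstrate. In this sketch (my illustration, assuming a fair coin), small batches of flips swing widely while large batches settle near the true 50%:

```python
import random

random.seed(0)  # fixed seed for a reproducible run

def heads_fraction(n):
    """Fraction of heads in n simulated fair-coin flips."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Small samples swing wildly; large samples settle near 50%.
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} flips: {heads_fraction(n):.3f} heads")
```

A 10-flip sample can easily come back 70/30 by chance alone, which is exactly the randomness that small A/B test samples hide.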
Many teams begin their venture into quantitative research through A/B tests. A/B tests expose two variants of the same feature, each to a sample set of users. Engagement with each feature is measured, and the option that performs better is chosen. This can be comparing a current feature against a potential new variant, or two variants of a new feature.
If the sample size is too small, the results won’t represent the diversity of the overall user base. As seen in the coin flip test, we underestimate the randomness that can happen within small samples. To minimize the impact of randomness, data being analyzed needs to reach a level of statistical confidence.
Statistical confidence is reached when the sample size is large enough to accurately represent the overall user base. Determining the required sample size can be tricky for those of us who aren’t trained data scientists, so an online sample-size calculator can help.
A general rule of thumb is to aim for a 95% confidence level, meaning there is only a 5% chance the observed result is due to random variation alone. Without reaching a high level of confidence we risk the following errors:
- False Positives (type I error) — The researcher wrongly detects a positive change, or a pattern that does not exist. This leads to wasted time and money, as a feature is built that adds no additional value.
- False Negatives (type II error) — The researcher misses the positive change, and the test is deemed a fail. This leads to missed opportunities for improvement.
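To guard against these errors, A/B results are usually run through a significance test before declaring a winner. A common choice is the two-proportion z-test; this is a sketch with made-up conversion numbers, not the article’s own method:

```python
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test.

    conv_a / conv_b: conversions in each variant.
    n_a / n_b: users exposed to each variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)       # combined conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value

# Hypothetical test: variant B converts 24% vs A's 20%, 1,000 users each.
p = ab_test_p_value(200, 1000, 240, 1000)
print(f"p-value: {p:.3f}")  # below 0.05, so unlikely to be random noise
```

If the same 4-point gap had come from 100 users per variant instead of 1,000, the p-value would be far above 0.05 — a reminder that an impressive-looking difference in a small sample is exactly where false positives live.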
Craft Questions Carefully
Accidentally or not, questions can be structured to lead a respondent to give a preferred answer. A good UX researcher understands how to phrase questions in a way that encourages the respondent to answer honestly.
Do you think the current colors used in Product X are ideal?
is a lot different than:
Don’t you think the current colors used in Product X are ideal?
The second question is loaded: it steers the respondent by implying that the colors used are ideal. Due to acquiescence bias, a tendency people have to agree in order to fit in, the respondent will likely choose to agree.
Research is not about finding the results we want. It's about determining the current state of reality. Learning is the ultimate goal, our hypothesis being ‘right’ or ‘wrong’ is a moot point.
Quantitative > Qualitative?
Qualitative (descriptive) data is consistently undervalued. Because it isn’t numbers based, it’s viewed as too subjective and less reliable. However, depending on the intention of the research, qualitative data can be far more useful than quantitative (numbers based) data.
Product design is all about connecting with humans. We not only want to build a product people can use, but also one they like. What people like is tangled in complexity, as it’s typically not a single trait that causes us to like something, but a number of factors that combine to form our opinion.
Complexity makes emotional connections hard to measure. This is where qualitative research shines, as it allows the researcher to understand the complex connections that make up the emotional attachment.
What We Measure is Who We Become
Product Analytics tools are a data-driven Product Manager’s best friend, as we can measure exactly how people use our product. However, it’s important to understand that by choosing to measure something, we are incentivizing our team to prioritize the measured thing.
Incentives can negatively impact behavior. Incentivizing the focus on one thing inadvertently dis-incentivizes the focus on others. The product team will naturally focus on optimizing areas being measured, those metrics will rise, and everyone will high-five for a job well done. Meanwhile, other important areas are ignored simply because we aren’t measuring them.
Metrics are a very powerful tool, and will quickly become the team’s compass as they help us understand if we are heading in the desired direction. This makes it critical to constantly evaluate that measured metrics line up with overall business goals. Otherwise, we risk the metrics pointing the team in the wrong direction.
Keep an eye out for vanity metrics: measurements that make us look good, but do not correlate with overall success.
For example, if the only metrics tracked are tied to individual feature optimization, the result will be excellent individual features. However, these metrics don’t necessarily reflect the overall user experience. Look for ways to track and optimize the user’s journey through the entire product.
Not All Problems Are Created Equal
When it comes to decision making and problem-solving, understanding when to use data, and how much data to use, begins with first determining the importance of the decision at hand.
We tend to believe that most decisions we face are important, when we are really confusing importance with urgency.
“What is important is seldom urgent and what is urgent is seldom important.” — Dwight D. Eisenhower
Urgent decisions need to be made right away. Important decisions have a high degree of risk and impact. Since urgent decisions require immediate attention, they give a false impression of importance.
It’s estimated we make 35,000 decisions per day, and the majority are unimportant. The right metrics and research are important filtering tools for helping us cut through the noise, and understand what is important versus urgent.
When faced with a problem, don’t jump straight into solution mode. Take a step back to understand if the problem is worth solving in the first place. This process will help determine the importance, which will guide the overall effort we invest in it.
Often, choosing not to solve a problem is the best decision, as it allows us to spend more time properly solving important problems.
Joe Van Os
Constantly discovering what it means to be a Product Manager, and passing on what I learn along the way.