How to Measure Product Maturity: A Guide for Early-Stage Investors & Founders (Part II)

People, process and revenue are leading indicators



Liron Pergament Gal

2 years ago | 9 min read

Introduction

Early-stage investors and startup founders often use qualitative, subjective heuristics to evaluate the product maturity of a startup. These heuristics fall into three categories: people, product, and market.

Given the many inherent unknowns, that evaluation can quickly become outdated.

An evaluation that focuses on a startup’s learning progress helps overcome this, since it is widely accepted that a startup’s top priority should be maximizing its speed of learning.

My previous post provided a framework for VCs and founders to measure a product’s maturity through milestones in the company’s journey towards product-market fit (PMF).

This post outlines three additional measurements of product maturity that, together with that framework, complete the picture and help assess the likelihood of the company achieving PMF and scaling successfully:

  1. People and learning aptitude
  2. Learning processes
  3. Early revenue

People & Learning Aptitude

Most early-stage investors will rightfully say that the main factor affecting their decision to invest in a company is the team. But what exactly does that mean? How can that be measured?

The objective measure of a team, from a product perspective, is their learning aptitude, which includes two major components: domain expertise, and skills that are critical for learning. By analyzing these areas, investors can gauge the maturity of the team, and their likelihood of achieving PMF. This enables them to reach a data-informed decision for investment, as well as identify red flags.

Domain Expertise

The team needs to become experts in their domain. They need to understand the problems of the industry they are addressing: What solutions already exist? What are their strengths, weaknesses, and gaps? They should also understand the “market type” they are in (new, existing, resegmented, or niche) in order to develop the right strategy.

For the team (and potential investors) to understand where they stand on this front, a simple calculation provides an answer: sum the total years of domain experience across the team and advisors. Team members are weighted at 100%; advisors are weighted by their time investment (usually anywhere between 5% and 40%). A minimal code sketch of this calculation appears after the list below.

A result over 20 is excellent. A result between 5 and 20 can explain slow progress through the PMF milestones; gaining more expertise could be the solution. Anything less than 5 should be addressed immediately by:

  • Recruiting advisors
  • Conducting user interviews
  • Attending conferences
  • Hiring people who do have domain expertise
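
To make this concrete, here is a minimal sketch of the weighted calculation described above; the names, numbers, and weights are purely illustrative:

```python
# Weighted domain-expertise score: team members count at 100%,
# advisors are weighted by their time investment (typically 5%-40%).

def domain_expertise_score(team_years, advisors):
    """team_years: years of domain experience per team member.
    advisors: (years, weight) pairs, weight usually in [0.05, 0.40]."""
    return sum(team_years) + sum(y * w for y, w in advisors)

# Hypothetical team: two founders with 8 and 4 years in the domain,
# plus one advisor with 20 years contributing ~20% of their time.
score = domain_expertise_score(team_years=[8, 4], advisors=[(20, 0.20)])
print(f"Weighted domain expertise: {score:.1f} years")  # 16.0 -> the 5-20 band
```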

Let’s look at what can be done through a real-world example:

I am currently advising a company whose founders entered a domain they had no connections or experience in. However, they did recognize this as a domain with many unsolved problems, and heavy monetary loss caused by those problems.

They spent an entire year just learning, building relationships, evaluating and talking with advisors. They read all of the industry news and attended professional industry events.

They learned about past failures in the space as well as who the key opinion leaders (KOLs) are. By analyzing those past failures, they came up with a plan for a solution (composed of a product and services around it) that they started pitching.

By the end of that year, they already had a queue of potential clients, so impressed by the founders’ understanding of their problems and their ideas for solutions that they were waiting to purchase the product as soon as it was ready. All of this was before a single line of code had been written.

Coverage of skills critical for learning

In most new products today, the technological risk is minuscule compared with the risk of failing to offer a product the market needs.
Therefore, the most important skills for early-stage companies to have are those that accelerate the learning path, conserving cash until spending is actually needed.

This doesn’t mean each of the following roles needs to be hired on day one, but the team should have sufficient coverage and experience in the following skills through hiring, advisors, or consulting services.

  • Core product management skills:
    A skilled product person will drive “disciplined” learning cycles. They will list and test hypotheses instead of encouraging an echo chamber that validates assumptions; focus on answering the known unknowns as well as surfacing the unknown unknowns; understand the importance of acquiring the RIGHT early adopters, who will give meaningful feedback, before building complex technology; and continuously aim for the lowest-fidelity prototype that can run each test, in order to reduce waste and learn quickly.
  • UX skills:
    With the proliferation of mobile apps, people have become accustomed to higher standards of UX; on its own, poor UX has become a cause of failure for 17% of startups. Good UX doesn’t start with creating mockups, as many people assume; it starts much earlier, with research. Experienced UX professionals will run a systematic study of target users and their requirements: defining personas and understanding their pains and gains in order to develop a value proposition that solves the users’ problems, not one that just “looks pretty”.
  • Product marketing skills:
    In these early stages, the team needs to be able to articulate a compelling value proposition and iterate on it rapidly. Datasheets and whitepapers can be used to test problem or market hypotheses early on, without needing to develop any code.

Conversely, a team composed largely of people with the following skills can be a red flag:

  • Engineering: companies that hire a large engineering staff and invest heavily in development before validating their market and problem hypotheses create a lot of code waste and an inflated burn rate. Take particular caution with companies whose founders all come from technological backgrounds: incredibly smart people with deep technical expertise can spend a lot of money and time developing complex technology without thinking about the burn rate or the actual problem they are solving.
  • Sales: founders need to be the first sellers. A sales team can’t be expected to sell a product if the value proposition isn’t proven. At most, if salespeople are hired, their initial goal should be to create connections and build relationships.

The company I mentioned earlier is an excellent example of hiring the right talent at the right stage: while learning about the market and the problems they wanted to address, the founders experimented with thought-leadership discussions at conferences and wrote collateral to test their understanding of those problems.

By the end of the first year, they still hadn’t hired a single engineer, and yet already had a waitlist of customers eager to pay for the solution they were talking about. That was the right stage to start iterating on the actual solution.

Compare that to another company I worked with, composed of 15 engineers and 2 salespeople. They burned through millions of investment dollars developing a very mature technology that they simply couldn’t sell. They had to go back to the drawing board to properly research the market they wanted to address and determine what problem their technology could solve. They ended up throwing out much of that technology and are only now making progress towards product-market fit.

Learning processes

Processes should be in place to help the team, not for the sake of having a defined process. They also change over time and should be designed around the needs and challenges the company faces at a given point in time; there is no “one size fits all” answer.

A well-structured learning process will accelerate the path towards PMF, as well as reduce wasted money and resources. This process should be outcome-driven, measurable, and fast.

Phrasing questions around the following indicators will help assess the learning process and its efficiency:

Outcome-oriented mindset (as reflected in the roadmap and work in progress): a roadmap phrased in learning goals and outcomes (a measurable impact on user behavior) rather than outputs (features) leaves room for experimentation and learning.

Companies that operate as feature factories create a list of features and dates, which is in actuality a release plan, often mistakenly referred to as a roadmap. This glorifies shipping and delivery instead of outcomes, and creates a lot of waste.

Measurement (as reflected in metrics and the definition of tasks): for every single task, the team needs to define, in advance, what success looks like (the expected impact on user behavior), how it will be measured, and what decision will follow based on the result.

By asking questions about features that have already been delivered, investors and team members can assess how data-driven the team’s process is, and whether they are learning from past mistakes.
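
To make this discipline concrete, here is a minimal sketch of what such an up-front task definition could look like; the fields and example values are illustrative, not a prescribed schema:

```python
# Each task records, before work starts: the expected impact on user
# behavior, how it will be measured, and the decision each outcome triggers.

from dataclasses import dataclass

@dataclass
class TaskHypothesis:
    task: str             # what is being built or changed
    expected_impact: str  # the measurable change in user behavior
    metric: str           # how success will be measured
    on_success: str       # decision if the metric moves as expected
    on_failure: str       # decision if it does not

# Hypothetical example entry:
onboarding_test = TaskHypothesis(
    task="Add a guided onboarding checklist",
    expected_impact="New users run their first search within 24 hours",
    metric="% of new signups running >=1 search on day one",
    on_success="Roll out to all users and test the next activation step",
    on_failure="Interview drop-off users before building anything else",
)
```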

Speed (meaning how quickly the team receives feedback, and how quickly it is able to incorporate what it learns from that feedback): delivering a useless feature quickly doesn’t actually deliver any value. Speed shouldn’t be measured by engineering velocity, but by how often new feedback is received and acted on.

The frequency with which the team meets customers (at least 5 meetings a week) and the time it takes them to create the next prototype or testable product with those learnings incorporated (1–2 weeks at most) are good indicators of speed.

I applied this to a startup I recently worked with. Their main value proposition involved users running searches on the data collected by the product. The search interface was very complicated and required learning a scripting language, which, naturally, led to customers not using it.

The team knew that customers weren’t using the product, but weren’t focusing on improving this: they were executing on a predefined list of features without measuring the impact of any of them.

It was very clear to me that those customers would churn, and there was no point in investing in additional features if customers couldn’t make use of the product’s core value proposition.

I proposed we ax the original roadmap and focus on what was actually needed to validate the solution hypothesis: solving the actual problem customers had bought the product for.

We phrased the roadmap, and the expectations from the team, in the following way: “X% of customers run Y+ searches per day”. This transformed the roadmap into a decision-making framework that enabled the team to keep iterating and making decisions (in this case, around how to increase search usage) in order to achieve the company goal.

They ended up adding canned fields to the search for the most commonly needed parameters, massively simplifying it. After each change, they measured the impact on usage and decided what to do next, until they validated the solution hypothesis.
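
As a minimal sketch, a roadmap metric like “X% of customers run Y+ searches per day” might be computed from raw usage events as follows; the event-log format and numbers here are hypothetical simplifications:

```python
# Share of customers who ran at least `min_searches` searches on a given day.

from collections import Counter

def pct_customers_over_threshold(search_events, day, min_searches=5):
    """search_events: list of (customer_id, date) pairs, one per search run."""
    per_customer = Counter(cust for cust, d in search_events if d == day)
    all_customers = {cust for cust, _ in search_events}
    if not all_customers:
        return 0.0
    hits = sum(1 for n in per_customer.values() if n >= min_searches)
    return 100.0 * hits / len(all_customers)

# Hypothetical log: customer "a" ran 6 searches that day, customer "b" ran 2.
events = [("a", "2024-05-01")] * 6 + [("b", "2024-05-01")] * 2
print(pct_customers_over_threshold(events, "2024-05-01"))  # 50.0
```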

Early Revenue

Revenue is one of the most important financial indicators of success, and it should not be ignored. However, in early-stage startups revenue can often be misleading. For companies that do have some revenue, it is important to verify that all paying customers share the same use case, one that matches the value proposition.

The simplest negative example of this is “Statement of Work” revenue, where the revenue essentially comes from professional services: the company creates a bespoke “product” for each customer, a common phenomenon with large enterprise clients.

An early-stage startup I worked with had around $700K ARR from 6 different customers. On paper this looked pretty good; however, it did not take long to see that every customer was using the product for a different purpose, pulling the team in different directions and blocking them from formulating a real strategy.

Not only that, the sales cycle was very opportunistic: these early customers all had connections to investors or to the head of sales. There was nothing repeatable in the sales cycle, and for the entire 6 months that followed, they did not close a single deal.

Eventually, the largest customer represented the direction they learned was best to pursue. They were able to convert 2 other customers to the same use case and vision, and had to part ways with the rest.

Lessons Learned

  • Early-stage startups require a different set of metrics, milestones, and founder skills.
  • Evaluating people does not have to be a subjective exercise; it can rely on a well-defined framework.
  • Investors and early-stage startups should be wary of treating early revenue as a leading indicator of success.
