Łukasz Miądowicz - AI, Growth & Platform PM

The average user doesn't exist - why segmentation comes before data

Why segmentation is the missing step between data, interviews, and the hypothesis you ship

Picture two PMs.

The first lives in Mixpanel and Amplitude. Dashboards open in three tabs. Funnels broken down by every possible event. She's run six A/B tests this quarter. She knows exactly what users do.

The second lives in customer interviews. Talks to ten users a week. Has a Notion full of quotes, JTBD frameworks, mental models. She knows exactly why users behave the way they do.

Both ship features that don't move the needle.

The first builds well-instrumented variations of features no one actually wanted. The second builds features for the five customers she happened to talk to last month - who, statistically, are nothing like her broader user base.

The PM playbook tells us how to fix this: use both. Data tells you what. Interviews tell you why. Triangulate.

Almost.

There's a missing step between data and interviews that separates senior PMs from mid-level ones, and that step is segmentation. It comes before either.

And once you have segmented signal, there's a second discipline most teams skip: turning that signal into a falsifiable hypothesis before you build anything.

This post covers both.

The funnel is an average. The average is a lie.

Every product has a funnel chart on the wall.

Signup → activation → first action → habit → upgrade.

The chart shows blended numbers. "62% of trial users reach activation." "Trial-to-paid is 7%."

These numbers are useful for tracking aggregate health. They are dangerous for making product decisions.

Because the "62% activation rate" is an average across solo devs, mid-market, enterprise, demo signups, and tech referrals.

Five different groups. Five different stories. One blended number that describes none of them.

When you optimize the "62%" you're optimizing for a fictional user who lives in the math. Push it to 68% and you might have shifted enterprise from 20% to 80% activation while solo devs stayed flat - or vice versa. Without segmentation, you'll never know.
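
To make that concrete, here's a toy calculation - every segment size and rate below is invented for illustration:

# Toy numbers, invented for illustration: the blended activation rate
# can climb even though one segment never moves.

segments = {
    # segment: (trial_users, activation_rate)
    "solo_dev":   (600, 0.60),
    "mid_market": (250, 0.55),
    "enterprise": (150, 0.80),
}

def blended(segs):
    activated = sum(users * rate for users, rate in segs.values())
    total = sum(users for users, _ in segs.values())
    return activated / total

print(f"before: {blended(segments):.1%}")   # ~61.8% - the "62%" on the dashboard

# Enterprise activation jumps; solo devs stay exactly where they were.
segments["enterprise"] = (150, 0.95)
print(f"after:  {blended(segments):.1%}")   # ~64.0% - the blend rises anyway

The dashboard celebrates the lift. Only a per-segment breakdown shows that all of it came from enterprise.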

Why data without segmentation lies

Aggregate metrics give you the direction of movement and the magnitude. They don't tell you which segment moved.

Two products with identical funnels can have completely different underlying realities - say, one converts solo users easily while enterprise stalls, and the other is the exact reverse.

These products have entirely different problems and need entirely different fixes. The dashboard treats them as twins.

A senior PM looks at every metric and asks: "What segments would I split this by, and would the segments tell a different story?"

If yes, the average is hiding the truth.

Why interviews without segmentation lie

This one bites less obviously but harder.

You schedule five customer interviews. The people who say yes are either passionate fans or vocally frustrated - that's why they replied at all. Neither group is representative. Both will dominate what you hear.

Worse: the customers who churned silently - never opened a support ticket, never replied to your survey, never engaged enough to be reachable - are the ones whose problem you most need to understand. They are systematically absent from your interview set.

Five interviews with the wrong segment is worse than no interviews, because it gives you false confidence.

A senior PM, before running an interview, asks: "What segment am I sampling? Why this segment? What's the bias in who said yes to this call?"

Segmentation is the third leg of the stool

The corrected framework:

Method         Tells you
Segmentation   Who it happened to
Quant          What happened, where, how much
Qual           Why it happened

You need all three. In order.

(Figure: aggregate data describes a phantom "average user"; segmented signal describes real users.)

But signal alone doesn't ship anything. The next discipline is turning that signal into a falsifiable bet.

The full cycle: from hidden signal to validated outcome

Most teams jump from "interesting finding" to "let's build it." Senior PMs add three steps in between: outcome, hypothesis, experiment.

Here's the full cycle:

Segment → Quant → Qual → Outcome → Hypothesis → Experiment → Validate - and if the hypothesis fails, back to qual.

1. Segment. Who are we actually talking about? Pick 2-3 axes that might matter (persona, plan, cohort, behavior). Not all will. The point is to test which ones do.

2. Quant - within segment. Open the dashboards, but don't look at the blended funnel. Break it down by your segments. Look for drop-offs that are concentrated in one segment, or steps where two segments behave dramatically differently.
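
A minimal sketch of what that looks like in practice, assuming a per-user export with a segment column and funnel-step flags (the file and column names here are made up):

import pandas as pd

# Hypothetical export: one row per trial user, with the segment they belong to
# and flags for the funnel steps they reached. Names are illustrative.
users = pd.read_csv("trial_users.csv")
# columns: user_id, segment, reached_activation (0/1), converted_to_paid (0/1)

by_segment = (
    users.groupby("segment")
         .agg(trials=("user_id", "count"),
              activation_rate=("reached_activation", "mean"),
              trial_to_paid=("converted_to_paid", "mean"))
         .sort_values("trial_to_paid")
)

print(by_segment)                                                    # the per-segment story
print("blended trial-to-paid:", users["converted_to_paid"].mean())  # the dashboard number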

3. Qual - within segment. Pick the segment that's hurting most (or has the highest leverage) and talk to those people. Not "users." Not "customers." That segment. 5-8 conversations. Ask about behavior, not preferences.

4. Outcome. Define the metric you want to move - for this segment. This is where most teams skip ahead and pay for it later.

Bad outcome: "improve activation." Good outcome: "move mid-market trial-to-paid from 3% to 6% in 90 days."

The good version is segment-specific, has a baseline, has a target, has a deadline. Without all four, you can't tell if you won or lost.

5. Hypothesis. State the bet in a falsifiable form. Use this structure:

IF we [INTERVENTION] for [SEGMENT]
THEN [METRIC] moves by [MAGNITUDE]
BECAUSE [MECHANISM]

The mechanism is the part most teams skip. Without it, you can't tell what you learned from the experiment - only whether the metric moved. The mechanism is what generalizes.

Bad hypothesis: "Let's add a trial summary page." Good hypothesis: "If we add a shareable trial-summary PDF for mid-market signups, then their trial-to-paid will move from 3% to 5%+ because the technical buyer needs an artifact to convince their budget owner without re-explaining the product."

If the metric doesn't move, you didn't just fail to build a feature. You learned the mechanism was wrong - and that tells you where to look next.
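
One way to force that structure is to write the bet down as a structured record instead of a sentence buried in a doc. A sketch, using the trial-summary example from above (the field names are mine, not a standard):

from dataclasses import dataclass

@dataclass
class Hypothesis:
    segment: str        # who the bet is about
    intervention: str   # what we change
    metric: str         # what we expect to move
    baseline: float
    target: float
    deadline_days: int
    mechanism: str      # why we believe it will move - the part most teams skip

trial_summary_bet = Hypothesis(
    segment="mid-market signups",
    intervention="shareable trial-summary PDF",
    metric="trial-to-paid",
    baseline=0.03,
    target=0.05,
    deadline_days=90,
    mechanism=("the technical buyer needs an artifact to convince their "
               "budget owner without re-explaining the product"),
)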

6. Experiment. Design the smallest credible test of the hypothesis.

Not the polished feature. The minimum thing that would shift the metric if the hypothesis is right.

Define success thresholds before you launch. "We'll call this a win if the segment moves by at least X over Y days." Pre-registration prevents the after-the-fact rationalization that kills most experimentation cultures.
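
A sketch of what that pre-registration can look like when you write the decision rule down before launch - all values here are illustrative, and a real evaluation would also check sample size and significance:

# Decision rule written down BEFORE the experiment launches.
SUCCESS_CRITERIA = {
    "segment": "mid_market",
    "metric": "trial_to_paid",
    "baseline": 0.03,
    "min_rate": 0.05,      # the pre-registered bar: call it a win at 5%+
    "window_days": 60,
}

def evaluate(observed_rate: float, criteria: dict) -> str:
    if observed_rate >= criteria["min_rate"]:
        return "win: ship broader, check whether the mechanism generalizes"
    if observed_rate > criteria["baseline"]:
        return "weak signal: iterate the experiment, keep the hypothesis"
    return "no effect: mechanism didn't fire, back to qual"

print(evaluate(0.058, SUCCESS_CRITERIA))   # -> "win: ..."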

7. Validate - within segment.

Did the metric move, for the segment you targeted?

Three possible outcomes, and each one teaches you something different:

Yes, enough: ship it broader and check whether the mechanism generalizes.
Yes, but small: iterate the experiment, not the hypothesis. Same bet, better test.
No: the mechanism didn't fire. Back to qual - what did we misunderstand? A killed hypothesis sharpens the next one.

The cycle isn't a one-shot waterfall. It's a loop. Most segments need 2-3 hypotheses before something lands.

A worked example

Here's the cycle in action.

I worked on a SaaS activation funnel where blended trial-to-paid sat around 7%. The team had spent a quarter trying to move it. They couldn't.

Step 1 - Segment. Split by company size and acquisition channel. Two axes felt likely to matter (different buying motions, different evaluation criteria).

Step 2 - Quant. Trial-to-paid by segment:

Solo devs: 14%
Mid-market: 3% (the leak)
Enterprise: 22%
Blended: 7% (describes nobody)

The blended 7% described nobody. Solo devs were doing fine. Enterprise was doing great (because sales pulled them through). Mid-market was where the leak was.

Step 3 - Qual. Pulled 8 mid-market signups who hadn't converted. Asked: walk me through the last 30 days since you signed up.

The pattern emerged after the fourth call. The technical user signed up, evaluated the product, and decided "yes, this works." Then they hit a wall: they needed to convince their manager. Their manager wanted to see security posture, projected cost, and what was actually configured - and the technical user had nothing to show. No artifact. No summary. They were re-pitching from memory, and most of them gave up.

The product worked fine. The handoff to the budget owner didn't exist.

Step 4 - Outcome. Move mid-market trial-to-paid from 3% to 6% within 60 days.

Step 5 - Hypothesis. "If we generate a shareable trial-summary page for mid-market signups, then their trial-to-paid will move from 3% to 6%+ because the technical evaluator can use it to convince their budget owner without re-pitching."

Mechanism: the technical user is already convinced. The blocker is internal selling. We're not helping the user evaluate - we're helping the user sell internally. That reframe is what made the hypothesis sharp.

Step 6 - Experiment. The smallest credible test we could run: a bare-bones version of the shareable trial-summary page, shown only to mid-market signups.

Pre-registered success: mid-market trial-to-paid up to 5% within 60 days.

Step 7 - Validate. After 60 days, mid-market trial-to-paid had cleared the pre-registered threshold.

The hypothesis landed. We shipped it to 100% of mid-market.

Now the second question: does the mechanism generalize? We checked the summary page's effect on solo devs. Marginal lift (~0.5%). Made sense - solo devs aren't selling internally. The mechanism was segment-specific, exactly as the hypothesis predicted.

Blended trial-to-paid moved from 7% to 11%.

We could not have built this from the aggregate funnel. We would not have heard this story from a random sample of interviews. The hypothesis would not have been sharp without the mechanism. The experiment would have over-built without the smallest-credible-test discipline. Every step did work the others couldn't.

Three ways segmentation goes wrong

1. Over-segmenting into vapor.

Slice your users into 47 segments and none of them has enough volume to support a decision. You end up with statistical noise everywhere. Fix: start with 2-3 axes you have reason to believe matter. Add more only when the signal demands it.

2. Segmenting after the fact.

Running an A/B test on the whole population and then asking "did it work for enterprise users?" is fishing. You'll find a positive segment somewhere by chance. Fix: pre-register your segment hypotheses. Decide before the experiment which segments you'll evaluate.

3. Segmenting by demographics instead of behavior.

"Company size" is sometimes meaningful but often a weak proxy. "Did they connect their first data source in the first session?" is usually a much stronger predictor of future behavior. Behavioral segments are tied to product reality. Demographic segments are tied to marketing reality. Both are useful - but for different decisions.

Three ways hypothesis-driven work goes wrong

1. Skipping the mechanism.

"We'll add a trial summary and see if conversion goes up" isn't a hypothesis - it's a feature request with a measurement strapped on. Without the mechanism, you can't learn from a failure. You only learn whether the metric moved. The mechanism is what makes the experiment informative, not just successful or unsuccessful.

2. Choosing experiments that can't fail meaningfully.

If the experiment is so big that you'll ship it regardless of the result, you're not running an experiment - you're rationalizing a roadmap. Good experiments have a real chance of being killed by the data. If your team can't name a result that would cause you to not ship, you don't have an experiment.

3. Validating against the wrong metric.

Building for a segment but measuring aggregate movement guarantees disappointment. A 3% lift in mid-market shows up as a barely-detectable wiggle in the blended number. Always validate against the segment-specific metric you defined in step 4. Aggregate movement is a secondary check, not the primary one.
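
A rough back-of-the-envelope, with the mid-market share of trials assumed at 30%:

# How a real segment win disappears into the blended number.
mid_market_share = 0.30      # assumed share of all trials
segment_lift = 0.03          # mid-market trial-to-paid: 3% -> 6%

blended_lift = mid_market_share * segment_lift
print(f"blended lift: {blended_lift:.2%}")   # 0.90% - easy to write off as noise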

The takeaway

Data tells you what happened.

Interviews tell you why.

Segmentation tells you who it happened to.

Outcomes tell you what you're trying to change.

Hypotheses tell you what you believe will change it, and why.

Experiments give you the smallest credible test.

Validation tells you whether the bet paid off - and what to bet on next.

Skip any of these and you'll optimize a funnel that describes nobody, ship features for the loudest customers, build the wrong intervention for the right segment, or ship the right thing without ever knowing why it worked.

The funnel on your wall is a story about a person who doesn't live in your product. Find the people who actually do - then bet, in writing, on what will change their behavior.

If it isn't measurable in a defined segment, it isn't a decision yet.


#Activation #Analytics #B2B #Customer Interviews #Customer Segmentation #Data Analysis #Decision Framework #Funnel Analysis #Growth PM #PLG #Product Discovery #Product Management #Product Manager #Product Strategy #Product-Led Growth #Retention #SaaS #Senior Product Manager #User Research #first principles