The Mistake We Keep Making
Most ecommerce teams don’t have a conversion problem.
They have a measurement problem: specifically, how to weigh conversion rate against revenue per visitor.
Somewhere along the way, conversion rate stopped being a way to understand behavior and became the thing we were trying to optimize. Dashboards revolved around it. Roadmaps were built around it. Success was judged by whether that number moved up and to the right.
That shift feels subtle, but it isn’t.

Conversion rate was never meant to be the goal. It was meant to be a signal. A directional indicator that told us something about how an experience was performing. But when a signal becomes a target, it stops being informative. It starts distorting behavior.
This isn’t because teams are careless or naïve. It’s because conversion rate is easy.
- Easy to calculate
- Easy to explain
- Easy to improve in the short term
And in an environment where pressure for results is constant, easy-to-move metrics tend to win.
The problem is that conversion rate tells a very narrow story. It tells you who finished. It says nothing about who struggled, who hesitated, who downgraded, or who would have converted later with the right help.
Yet entire optimization programs were built around it.
When you treat conversion rate as the goal, you implicitly design experiences to force completion instead of creating value. Discounts, urgency, simplification, artificial pressure. These tactics work not because they improve the experience, but because they compress decisions.
And compression looks like progress until you zoom out.
That’s how teams end up celebrating wins that quietly hurt the business. Conversion rate goes up. Revenue doesn’t. Or revenue goes up briefly while long-term value erodes.
This is where the real problem starts. Not with conversion rate itself, but with what we asked it to do.
What Conversion Rate Actually Measures
At its core, conversion rate answers a very simple question:
Of the people who showed up, how many completed a transaction during that session?
That’s it.
It does not measure:
- Value created
- Quality of decision
- Long-term impact
- Whether the experience helped or hindered the visitor
- Whether someone moved meaningfully closer to buying later
Conversion rate is binary and moment-bound. A person either converts or they don’t, within a fixed window of time. Everything that happens before, during, or after that moment is collapsed into a yes or no.
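A quick, made-up illustration: if a store gets 100,000 sessions in a month and 2,000 of them end in an order, the conversion rate is 2,000 ÷ 100,000, or 2%. Every one of the other 98,000 sessions counts the same, whether the visitor bounced in three seconds or carefully compared five products: zero.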
That doesn’t make it useless. It makes it partial.
Where teams go wrong is assuming that improving this one number means the experience itself improved. In practice, conversion rate is highly sensitive to tactics that compress decisions rather than clarify them.
You can raise conversion rate by:
- Removing choice
- Lowering prices
- Increasing urgency
- Overemphasizing discounts
- Narrowing what people see
- Pushing visitors toward cheaper or faster decisions
None of those necessarily create more value. Many of them do the opposite.
This is why conversion rate is best understood as a directional signal, not an outcome. It can tell you that something changed. It cannot tell you whether that change was good for the business.
Two experiences can have the same conversion rate and wildly different results.
Two experiments can move conversion rate in opposite directions and still produce the same revenue outcome.
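A hypothetical pair of examples makes both points. Two sites can each convert 2% of visitors, but if one averages $100 per order and the other $140, their revenue per visitor is $2.00 versus $2.80. Same conversion rate, very different businesses. Likewise, one test might lift conversion rate from 2.0% to 2.5% while average order value falls from $100 to $80, and another might drop conversion rate to 1.6% while average order value rises to $125. Both land at $2.00 per visitor.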
Conversion rate tells you who finished.
It tells you nothing about who you helped.
And once you see that distinction clearly, it becomes obvious why treating conversion rate as the goal leads teams to optimize for completion instead of progress.
That’s the trap.
What Happens When a Signal Becomes the Goal
The real problem with conversion rate is not the metric itself.
It’s what happens when teams start treating it as the objective.
Once conversion rate becomes the goal, optimization naturally shifts toward tactics that force outcomes instead of supporting decisions. The work stops being about helping people move forward and starts being about getting a yes as quickly as possible.
This is where the same patterns repeat:
- Heavy reliance on discounts and promotions
- Increased urgency and pressure messaging
- Fewer choices, less context, less exploration
- Design decisions that favor speed over clarity
- Channel pruning that removes “harder” traffic to make numbers look better
Most of these changes will raise conversion rate. That’s exactly why they’re tempting.
But raising conversion rate this way often does one of two things.
Either it shifts people toward cheaper decisions, lowering average order value and margin.
Or it converts people who were not actually ready, at the cost of long-term value, trust, or future purchases.
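The numbers don’t have to move much for this to hurt. Suppose, hypothetically, a sitewide discount lifts conversion rate from 2.0% to 2.4% while average order value slips from $100 to $75. Revenue per visitor falls from $2.00 to $1.80: a 20% conversion “win” that costs 10% of revenue, before even counting the margin given away.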
This is why teams end up celebrating wins that feel uncomfortable. Conversion rate is up, but revenue is flat. Or revenue is down. Or the gains disappear the moment promotions are pulled back.
At that point, the organization isn’t optimizing anymore. It’s compensating.
Conversion rate is extremely sensitive to pressure and filtering. That makes it a dangerous metric to optimize in isolation, especially in retail environments where promotions, seasonality, and channel mix are constantly shifting.
When the signal becomes the goal, teams stop asking whether the experience improved. They only ask whether the number moved.
That’s how optimization drifts from creating value to extracting it.
Why the 98% Matters More Than the 2%
Most optimization conversations implicitly center on the same group: the people who are already ready to buy.
That’s the 2%.
They know what they want. They’ve narrowed their options. They’re close to a decision. Small nudges can push them over the line, which is why conversion rate responds so quickly to urgency and discounts.
But that group is already at the end of the journey.
The much larger opportunity sits with everyone else.
The other 98% are not non-converters. They’re people mid-journey. Researching. Comparing. Exploring. Trying to understand options, prices, trade-offs, and fit. Some will buy later. Some will come back multiple times.
What they share is that they are still making progress, even if they are not converting today.
Conversion rate treats all of these people the same. It collapses every outcome into a binary result.
That framing erases everything that happens before the final decision.
It ignores whether someone found what they were looking for. Whether they understood the differences between products. Whether the experience reduced uncertainty or increased confidence.
When teams focus on the 2%, the experience gets shaped around finishing.
When teams focus on the 98%, the experience gets shaped around progress.
Experiences built for the 98% prioritize clarity over pressure. They help people narrow choices instead of rushing them. They guide rather than squeeze.
Ironically, this is often what creates stronger outcomes for the 2% as well.
Revenue per Visitor vs Conversion Rate: Why the Lens Matters
Revenue per visitor (RPV) forces a different kind of honesty.
Unlike conversion rate, it does not ask a single question. It absorbs the outcome of many decisions at once. Did people find something relevant? Did they understand the value? Did the experience help them narrow choices? Did they feel confident enough to spend more or come back later?
RPV reflects the combined effect of all of that.
This is why some experiments that “failed” on conversion rate were still wins. Improvements that helped people compare, understand options, or reduce friction often increased RPV even when conversion rate stayed flat.
The opposite is also true.
Plenty of tactics increase conversion rate while quietly damaging RPV. Discounts that pull people into cheaper products. Urgency that rushes decisions. Promotions that close a sale today but train customers to wait or trade down tomorrow.
Conversion rate celebrates those wins.
RPV exposes the cost.
I often joke that I can get any ecommerce site to a 100 percent conversion rate.
Just make everything free.
Great metric. Terrible business.
That joke works because it exposes the flaw in treating conversion rate as the outcome. A metric that can be maximized without creating value is not a goal. It’s a signal that needs context.
That’s what RPV provides.
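For teams that want to see the two lenses side by side, here is a minimal Python sketch. The Session structure, the variant names, and the numbers are invented for illustration and aren’t tied to any particular analytics stack; the point is only that both metrics come from the same raw sessions.

```python
from dataclasses import dataclass

@dataclass
class Session:
    variant: str    # which experiment arm the visitor saw, e.g. "control"
    revenue: float  # order total for the session; 0.0 means no purchase

def summarize(sessions: list[Session]) -> dict[str, dict[str, float]]:
    """Compute conversion rate and revenue per visitor (RPV) per variant."""
    stats: dict[str, dict[str, float]] = {}
    for variant in sorted({s.variant for s in sessions}):
        group = [s for s in sessions if s.variant == variant]
        visitors = len(group)
        orders = sum(1 for s in group if s.revenue > 0)
        stats[variant] = {
            "conversion_rate": orders / visitors,
            "rpv": sum(s.revenue for s in group) / visitors,
        }
    return stats

# Invented data: the treatment converts slightly fewer sessions,
# but converted sessions spend more.
sessions = (
    [Session("control", 100.0)] * 20 + [Session("control", 0.0)] * 980
    + [Session("treatment", 130.0)] * 18 + [Session("treatment", 0.0)] * 982
)

for variant, s in summarize(sessions).items():
    print(f"{variant}: CR = {s['conversion_rate']:.1%}, RPV = ${s['rpv']:.2f}")
```

In this made-up data the treatment converts fewer sessions (1.8% vs 2.0%) but earns more per visitor ($2.34 vs $2.00). Judged by conversion rate alone it looks like a loss; judged by RPV it is the better experience.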
What Changes When RPV Is the Lens
When revenue per visitor becomes the lens, the questions change.
Teams stop asking whether something increased conversion rate and start asking whether it created value. A test that leaves conversion rate flat is no longer dismissed by default. If RPV improves, the question becomes why the experience worked.
Roadmaps change. Work that improves clarity, navigation, and decision-making becomes easier to justify because its impact shows up in revenue, even when it doesn’t produce a clean conversion lift.
Internal debates get healthier. Product, UX, merchandising, personalization, and experimentation stop pulling in different directions. They’re all accountable to the same outcome: did this increase the value created per visitor?
Conversion rate still matters. It remains an important signal. But it’s no longer optimized in isolation.
When RPV is the lens, optimization stops being about squeezing outcomes and starts being about improving the system that produces decisions.
Leading marketing measurement frameworks emphasize defining strategic KPIs that align with actual business outcomes and revenue growth, rather than optimizing isolated tactical metrics that can distort how performance is interpreted. (Source: Boston Consulting Group)
A Note on Better Metrics
It’s fair to ask whether there’s an even better metric.
Profit per visitor is ideal in theory, but in practice most experimentation programs don’t have the fidelity to measure it reliably. “Sustainable conversion rate” sounds appealing, but sustainability is an outcome, not a metric.
Retail is volatile by nature. Promotions, seasonality, and channel mix make clean windows hard to isolate.
RPV isn’t perfect. But it’s more honest.
It reflects trade-offs instead of hiding them. It captures value creation instead of just completion. And it aligns teams around outcomes instead of tactics.
The Real Optimization Problem
Most ecommerce teams don’t have a conversion problem.
They have a framing problem.
When conversion rate becomes the finish line, experiences drift toward pressure and short-term wins that look good in dashboards but weaken the business over time.
Revenue per visitor forces a different standard. It rewards clarity over urgency. Progress over pressure. Value over extraction.
Conversion rate still matters.
It’s a signal.
But the real question is whether the experience earned its revenue.
That’s the work.
That’s the shift.
And that’s where healthier growth usually comes from.
Further Reading
Reducing Friction: The Most Overlooked Way to Improve Conversions
Adaptive PDP Strategy: Why Every Product Page Has Two Jobs
eCommerce Friction Points: More Clicks, Better Results
