Retention is the metric most teams trust.
It shows up in every dashboard and in every update to senior stakeholders, and when retention looks strong, it creates a reassuring impression that the product is performing effectively.
If users continue to return, it is often assumed the product is effective. But retention can be ambiguous: continued use does not always indicate genuine belief in the product. Consider a project management tool adopted by several teams within a large corporation. The platform showed consistently high retention because managers mandated its use. Despite regular logins, a series of internal surveys found that most employees considered the tool cumbersome and preferred more intuitive alternatives. The high retention figures masked a lack of genuine engagement, and the tool’s adoption was ultimately ineffective.

How Retention Became the Stand-In for Product Success
Retention became a key metric for valid reasons. In consumer products, repeat usage often correlates with value. Users typically return only if the product provides utility, entertainment, or efficiency.
Over time, this logic became standard practice. Retention curves were used as indicators of product health, with churn viewed negatively and engagement prioritised.
However, retention is a lagging indicator by definition. It reflects user behaviour after decisions have already occurred. While retention effectively captures what users did and can reveal important patterns about ongoing engagement, it does not provide insight into the underlying motivations or reasons for those actions. This distinction is more significant than many teams realise.
Activation Answers a Different Question Entirely
Retention asks: Did the user come back?
Activation asks: Did the user decide this product was worth coming back to?
To measure activation effectively, identify the key actions or milestones that indicate users have recognised the product’s core value. These could include completing an onboarding process, achieving a specific usage milestone, or using a primary feature that exemplifies the product’s benefits.
By tracking these critical steps, teams can gauge when users begin to form a belief in the product, providing actionable insights for subsequent engagement strategies.
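As a rough illustration, activation can be computed from an event log as the share of new users who complete a chosen milestone within a fixed window. The milestone name, the seven-day window, and the data shape below are all assumptions for the sketch, not a prescribed standard:

```python
from datetime import datetime, timedelta

# Hypothetical event log: (user_id, event_name, timestamp)
events = [
    ("u1", "signed_up", datetime(2024, 1, 1)),
    ("u1", "created_first_project", datetime(2024, 1, 2)),
    ("u2", "signed_up", datetime(2024, 1, 1)),
    ("u3", "signed_up", datetime(2024, 1, 3)),
    ("u3", "created_first_project", datetime(2024, 1, 20)),  # outside the window
]

MILESTONE = "created_first_project"   # assumed activation milestone
WINDOW = timedelta(days=7)            # assumed activation window

def activation_rate(events):
    """Share of signed-up users who hit the milestone within the window."""
    signups = {u: t for u, e, t in events if e == "signed_up"}
    activated = {
        u for u, e, t in events
        if e == MILESTONE and u in signups and t - signups[u] <= WINDOW
    }
    return len(activated) / len(signups) if signups else 0.0

print(f"Activation rate: {activation_rate(events):.0%}")
```

The window matters: a milestone reached months after signup usually reflects coercion or coincidence rather than an early moment of recognised value.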
A user can return for reasons that have very little to do with belief:
- They were told to use the product.
- It’s embedded in a process they can’t avoid.
- Switching feels risky or inconvenient.
- There is no obvious alternative.
In these cases, retention appears strong, but user conviction is weak. As one user candidly expressed, “We use it because we must, not because we love it.” This often leads to a disconnect: usage metrics appear favourable, yet desired outcomes fail to materialise, and productivity gains stall. Behaviour does not compound, and adoption remains superficial.
Retention tells you who stayed. Activation tells you who believed.
The Enterprise Retention Trap
Nowhere is this clearer than in enterprise products.
Enterprise tools often report high retention because users are required to log in, workflows depend on these tools, and long-term contracts reduce churn. While metrics may appear stable, a closer examination often reveals a different reality. Consider an organisation that recently rolled out a new SaaS platform, expecting it to revolutionise its workflow. Initially, retention metrics looked promising, as employees continued to log in daily. However, surveys revealed that most users felt the system added little value and would have preferred their previous tools. The discrepancy highlights the pitfalls of relying on retention figures alone.
Research on enterprise software consistently shows that most features in large platforms are rarely or never used. Product analytics firms often report that 60-80% of features go unused. The software is available, but only a small portion of its value is realised.
This is not due to user reluctance or resistance. Instead, the product often fails to demonstrate its relevance to users.
When Retention Metrics Lie by Omission
A report from UMA Technology explains that dashboards can still be misleading even if the data displayed is accurate.
They tell you:
- How often users log in
- How long sessions last
- How many actions are completed
However, these metrics do not reveal whether users trust the product, rely on it, or would choose it if given alternatives. As a result, many organisations develop parallel systems. Shadow spreadsheets, persistent email workflows, and offline processes continue as backups.
The product may not have failed, but it has not earned user belief. Genuine belief is the actual driver of meaningful retention.
Why Retention-First Thinking Leads Teams Astray
When teams observe retention flattening or declining, they often respond immediately.
- More features are added to increase engagement.
- More notifications are introduced to pull users back.
- More training is rolled out to “drive adoption”.
However, if activation is weak, these efforts rarely address the root cause. Rather than clarifying value, they introduce complexity. Instead of reinforcing belief, they increase user effort. The product becomes more cumbersome when it should be simplified.
As a result, teams often optimise for activity rather than meaningful progress.
Industry data supports this point. Multiple studies show that increasing retention alone does not improve outcomes if the value proposition is unclear. In one survey by Productivity Insights, for example, retention increased by 20% after a feature update, yet average task completion time did not improve: activity does not necessarily equate to progress. Retaining users achieves little if they do not understand the product’s purpose.
Retention can reinforce users’ commitment to a product, but it does not generate initial belief, which depends on whether users have experienced the product’s core value. Frameworks such as AARRR (Acquisition, Activation, Retention, Revenue, Referral) and HEART (Happiness, Engagement, Adoption, Retention, Task Success) let teams assess each phase of the user journey systematically. AARRR breaks the journey into distinct stages, so teams can see whether problems originate at activation or appear later, during retention or revenue. HEART adds qualitative dimensions such as happiness and engagement alongside adoption and retention, offering a fuller view of user experience and satisfaction. Used deliberately, these frameworks help product teams diagnose engagement problems and align operational metrics with genuine user belief.
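A minimal sketch of the AARRR idea: express the funnel as per-stage conversion rates, so a weak activation step stands out even when downstream retention looks healthy. The stage counts below are invented for illustration:

```python
# Illustrative AARRR funnel: number of users reaching each stage (hypothetical)
funnel = [
    ("Acquisition", 10_000),
    ("Activation", 2_500),
    ("Retention", 2_200),
    ("Revenue", 1_900),
    ("Referral", 400),
]

def stage_conversions(funnel):
    """Conversion rate from each stage to the next."""
    return [
        (curr_name, next_name, next_count / curr_count)
        for (curr_name, curr_count), (next_name, next_count)
        in zip(funnel, funnel[1:])
    ]

for src, dst, rate in stage_conversions(funnel):
    print(f"{src} -> {dst}: {rate:.0%}")
```

In this made-up funnel, retention among activated users looks strong (88%), yet only a quarter of acquired users ever activate. A retention-first team would see a healthy curve; a funnel view shows where the fix belongs.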
The Question Retention Will Never Answer
According to Atlassian, activation metrics, which measure how many users experience a product’s core value, provide key insights that retention metrics alone cannot offer. Retention metrics remain helpful in understanding ongoing user engagement.
They’re just incomplete. They can’t tell you:
- Whether users feel the product makes their work easier
- Whether it reduces risk or uncertainty
- Whether it fits naturally into how they operate
These answers emerge earlier in the user journey, when the product first demonstrates recognisable value. Therefore, the more helpful question is not “How do we improve retention?”
It’s this: What convinced our most successful users that this product was worth sticking with?
If this answer is unclear, retention metrics only reflect surface-level engagement.
If dashboards appear healthy but adoption remains fragile, the underlying issue is often activation rather than engagement.
This is the type of disconnect often identified during discovery workshops and product audits, where usage is present but user belief has not fully developed.
Reframing the Order of Optimisation
Activation and retention are not competing priorities; they are sequential.
Activation creates belief.
Belief creates momentum.
Momentum creates retention that actually means something.
When teams reverse this order, they optimise for presence rather than progress. At this point, retention may appear positive while the product’s growth stagnates. This stagnation translates into an opportunity cost, where the product underdelivers on its potential returns. A hypothetical scenario illustrates this: if a 10% increase in user belief could capture an additional £500,000 in annual revenue, maintaining the status quo entails significant financial risk. Quantifying these costs can provide a stark reminder of the importance of focusing on meaningful metrics rather than mere presence.
To turn insights into actionable steps, consider conducting a user-belief survey to understand your users’ perceptions of value better. Questions could include: ‘What specific problems does the product solve for you?’ and ‘How confident are you that the product will meet your needs over time?’
Additionally, mapping out activation milestones can help identify when users truly engage with the product’s core features. For example, teams might outline milestones such as ‘completed initial product walkthrough,’ ‘used primary feature for the first time,’ or ‘achieved key user success metric.’
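The milestone mapping described above can be sketched as a simple per-user checklist that reports how far each user has progressed. The milestone names mirror the examples in the text and are assumptions, not a prescribed set:

```python
# Assumed activation milestones, in the order users typically reach them
MILESTONES = [
    "completed_walkthrough",
    "used_primary_feature",
    "achieved_success_metric",
]

# Hypothetical per-user event history
user_events = {
    "u1": {"completed_walkthrough", "used_primary_feature"},
    "u2": {"completed_walkthrough"},
    "u3": set(),
}

def furthest_milestone(events):
    """Return the last consecutive milestone reached, or None if no progress."""
    reached = None
    for milestone in MILESTONES:
        if milestone in events:
            reached = milestone
        else:
            break
    return reached

for user, events in user_events.items():
    print(user, "->", furthest_milestone(events))
```

Grouping users by furthest milestone turns a single activation number into a distribution, which makes it easier to see exactly where belief stalls.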
These strategies enable teams to align more closely with genuine user engagement and belief, fostering a more meaningful retention strategy.
Up next in the series
Part 3: Activation Isn’t One-and-Done (But It’s Not Continuous Either)
If activation is about belief, what happens when belief fades, contexts change, or new features appear?