A retention curve plots the percentage of a cohort still active at each period after signup. Its shape, more than any single number on it, tells you what is broken. A curve that flattens at 35% signals product/market fit. A curve that drops to zero by week 8 signals an activation problem. A curve that bends back upward means network effects are kicking in. This guide names six shapes you will actually encounter, what causes each, the metric to confirm the diagnosis, and the lever that moves the curve.

How do you read a retention curve?

Reading a retention curve means looking at three properties in order: the first-period drop, the slope of the decline, and whether the curve flattens. Together these three properties separate every healthy product from every unhealthy one.

  1. First-period drop. How much of the cohort is gone by Week 1 (consumer) or Day 30 (B2B)? A drop steeper than 60% almost always points to activation, not product quality.
  2. Decline slope. Between the early drop and the long-tail, how fast are users leaving? A linear slow-bleed and a fast exponential decay require different fixes.
  3. Asymptote. Does the curve flatten? At what height? Brian Balfour calls a flattening curve the third checkpoint of product/market fit. Andrew Chen calls it the single most reliable PMF signal in consumer.

Pair the curve with a DAU/MAU stickiness ratio and a power-user histogram (L7 or L30: days active out of the last 7 or 30) so you can tell whether the people who stuck around are casual or core. Amplitude's retention guide walks through both views.
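The three-property read can be sketched in a few lines. This is a minimal illustration, assuming `curve` is a list of retention fractions indexed by period with period 0 equal to 1.0 (the whole cohort); the flatness tolerance is an invented threshold, not a benchmark.

```python
def diagnose(curve, flat_tolerance=0.02):
    """Return (first_period_drop, avg_decline_slope, asymptote_or_None)."""
    first_drop = curve[0] - curve[1]                    # property 1: first-period drop
    slope = (curve[1] - curve[-1]) / (len(curve) - 2)   # property 2: avg decline per period after the drop
    tail = curve[-3:]                                   # property 3: does the tail flatten?
    flattened = max(tail) - min(tail) <= flat_tolerance
    return first_drop, slope, (curve[-1] if flattened else None)

# Illustrative cohort: a big Week-1 drop, then a gentle decline that flattens at 37%.
healthy = [1.00, 0.55, 0.45, 0.40, 0.38, 0.37, 0.37]
drop, slope, asymptote = diagnose(healthy)
```

A real implementation would smooth noise before testing for flatness; the point here is only the order of inspection: drop first, slope second, asymptote last.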

What are the 6 retention curve shapes you'll actually see?

Six patterns cover nearly every retention curve a B2B or consumer product will produce. The table below maps each shape to its underlying cause, the metric you should inspect to confirm the diagnosis, and the primary lever that moves it. Each shape gets its own deep-dive section below.

| # | Shape | What it means | Inspect this metric | Lever that moves it |
|---|-------|---------------|---------------------|---------------------|
| 1 | Flattening | Product/market fit for a segment | Asymptote height by cohort | Onboard more of the right users |
| 2 | Smiling | Resurrection outpacing churn (network effect) | Resurrected users / total cohort | Strengthen viral and lifecycle loops |
| 3 | Immediate-drop | Activation failure | Day-1 to Day-7 retention | Onboarding redesign, faster time-to-value |
| 4 | Slow-bleed | No habit, wrong-fit segment, or weak core loop | Sessions per active user trend | Re-engagement triggers, ICP tightening |
| 5 | Hockey-stick | Compounding network effects | Magnitude of weekly cohort improvement | Invest in supply, content, or social loops |
| 6 | Double-hump | Cyclical or seasonal use case | Days-since-last-active distribution | Calendar-aware re-engagement |

The first three are diagnostic for whether the product works. The last three are diagnostic for how the product works. You need both before you choose a fix.
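As a toy illustration of the mapping, a heuristic classifier over a single cohort curve might look like the sketch below. All thresholds are invented for the example; hockey-stick and double-hump need cross-cohort or cyclical context that a single curve does not carry, so they are omitted.

```python
def classify(curve):
    """Crude heuristic over retention fractions (period 0 = 1.0). Illustrative only."""
    first_drop = curve[0] - curve[1]
    tail = curve[-4:]
    if tail[-1] > min(tail) + 0.03:
        return "smiling"            # tail bends back upward
    if curve[-1] < 0.05:            # cohort effectively gone
        return "immediate-drop" if first_drop >= 0.6 else "slow-bleed"
    if max(tail) - min(tail) <= 0.03:
        return "flattening"         # stable plateau
    return "slow-bleed"             # still declining, never flattens
```

In practice you would classify smoothed curves across many cohorts, but even this sketch shows that each shape is a testable property, not a judgment call.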

What is a smiling retention curve?

A smiling retention curve is one that drops, levels, and then bends back upward over time. It happens when resurrection (dormant users returning) starts to outpace churn in older cohorts. Brian Balfour and Lenny Rachitsky describe rising-dipping-rising retention as a rare escape-velocity signal.

Underlying causes

  • Network effects pulling users back (more friends on the platform, more supply, more content).
  • Lifecycle marketing maturing (better win-back campaigns, calendar triggers).
  • Use case expansion (the product solves more problems for the same user over time).

Metric to confirm

Resurrected users as a share of the cohort, plotted week over week. In Sequoia's retention essay, this is the moment older cohorts achieve balance between resurrection and churn, then tip past it.
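The resurrection-share metric is easy to compute once you have per-user activity. A minimal sketch, assuming `activity` maps each cohort user to the set of week numbers in which they were active (the data layout and names are illustrative):

```python
def resurrection_share(activity, week):
    """Fraction of the cohort active in `week` after being inactive the week before."""
    cohort = len(activity)
    resurrected = sum(
        1 for weeks in activity.values()
        if week in weeks and (week - 1) not in weeks
    )
    return resurrected / cohort

activity = {
    "u1": {0, 1, 2, 3},   # steadily active
    "u2": {0, 3},         # dormant in weeks 1-2, back in week 3: resurrected
    "u3": {0},            # churned
}
share = resurrection_share(activity, week=3)
```

Plot this share week over week; a smiling curve shows it rising until it outpaces the churn rate of the same cohort.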

Lever that moves it

Double down on whatever loop is pulling users back. If it is social proof, ship more sharing surfaces. If it is content, ship more content. Smiling curves are not engineered; they are amplified.

What does a flattening retention curve indicate?

A flattening retention curve indicates that a stable segment of your cohort has found durable value and is unlikely to churn further. It is the canonical signal of product/market fit. Andrew Chen lists "cohort retention curves that flatten" as the first of his ten magic PMF metrics.

Underlying causes

  • A core user behaviour has become habitual.
  • Self-selection: high-intent users stayed, low-intent users churned out early.
  • The product solves a recurring problem the user faces on a natural cadence.

Metric to confirm

The asymptote height by cohort. Compare last quarter's flattened line to this quarter's. If the new cohorts flatten higher, your activation work is paying off. Lenny Rachitsky's benchmark study with Casey Winters suggests six-month retention of ~40% is good and ~70% great for consumer subscription, 60% good and 80% great for SMB/mid-market SaaS, and 70% good and 90% great for enterprise SaaS.
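Comparing asymptote height across cohorts is a two-line calculation. A sketch, assuming each cohort is a list of retention fractions and approximating the asymptote as the mean of the last three periods (that approximation is a choice for the example, not a standard):

```python
def asymptote(curve):
    """Approximate the plateau as the mean of the last three periods."""
    return sum(curve[-3:]) / 3

# Illustrative cohorts: Q2 signups flatten higher than Q1 signups.
q1_cohort = [1.00, 0.50, 0.42, 0.40, 0.39, 0.39]
q2_cohort = [1.00, 0.55, 0.48, 0.46, 0.45, 0.45]

improvement = asymptote(q2_cohort) - asymptote(q1_cohort)  # positive = flattening higher
```

A positive delta across successive cohorts is the confirmation that activation changes, not noise, moved the plateau.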

Lever that moves it

Do not change the product. Change the mix of users you acquire. Find the segment whose curve flattens highest and double acquisition spend toward look-alikes.

Good vs great 6-month retention by business model:

| Business model | Good | Great |
|----------------|------|-------|
| Consumer Social | 25% | 45% |
| Consumer Subscription | 40% | 70% |
| SMB/Mid-Market SaaS | 60% | 80% |
| Enterprise SaaS | 70% | 90% |

Source: Lenny Rachitsky & Casey Winters, What Is Good Retention benchmark study.

What does an immediate drop in retention mean?

An immediate-drop retention curve loses 60-80% of the cohort in the first session or first week, then approaches zero. It is almost always a guidance failure, not a product failure. According to SaaS Factor, 40-60% of users never come back after their first session because onboarding asks them to work before they experience value.

Underlying causes

  • Time-to-first-value is too long.
  • Setup requires data, integrations, or invites the user is not ready to provide.
  • The promise on the landing page does not match the first-run experience.

Metric to confirm

Day-1 to Day-7 retention for sign-up cohorts, segmented by acquisition source. In mobile, average retention drops from 26.5% on Day 1 to roughly 12% by Day 7 (digia.tech). Anything materially worse is an onboarding bug, not a market signal.
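Segmenting Day-N retention by acquisition source is a small aggregation. A sketch, assuming `signups` is a list of `(source, days_active)` records where `days_active` is the set of days the user returned, day 0 being signup (layout and names are illustrative):

```python
from collections import defaultdict

def retention_by_source(signups, day):
    """Share of each source's cohort active on `day`."""
    totals, retained = defaultdict(int), defaultdict(int)
    for source, days_active in signups:
        totals[source] += 1
        retained[source] += day in days_active   # bool counts as 0/1
    return {s: retained[s] / totals[s] for s in totals}

signups = [
    ("organic", {0, 1, 7}),
    ("organic", {0, 1}),
    ("paid",    {0}),
    ("paid",    {0, 1}),
]
d1 = retention_by_source(signups, day=1)
```

Run it for day 1 and day 7 per source; the source whose curve collapses fastest is usually the cheapest fix, because you can simply stop buying it.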

Lever that moves it

Redesign for speed-to-aha. Productfruits research found personalised onboarding flows lift activation 30-50% over generic ones. Shorten setup, defer optional steps, and ship a guided first action that ends with the user experiencing the core value.

What does a slow-bleed retention curve mean?

A slow-bleed retention curve declines steadily after the first 30-90 days and never flattens. Each cohort eventually goes to zero. Reforge calls this "the silent killer" because it hides behind healthy top-of-funnel growth for quarters before showing up in revenue.

Underlying causes

  • The product solves a one-time problem, not a recurring one.
  • You have acquired the wrong ICP and they slowly realise the fit is poor.
  • The core engagement loop is too weak to form a habit.
  • Competitors are matching your value with lower switching cost.

Metric to confirm

Sessions per active user trended over cohort age. If the count is dropping for the same user month-over-month, engagement is decaying before they leave. Pair with gross revenue retention in B2B. SaaS Capital's 2026 benchmarks show median bootstrapped SaaS NRR at 103%, with top-quartile near 118%. If you are well below median and not flattening, you have a slow-bleed.
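The sessions-per-active-user trend over cohort age can be sketched like this, assuming `sessions` maps each user to a dict of month-since-signup to session count (an illustrative layout):

```python
def sessions_per_active_user(sessions, month):
    """Average sessions among users still active in the given cohort month."""
    counts = [m[month] for m in sessions.values() if m.get(month, 0) > 0]
    return sum(counts) / len(counts) if counts else 0.0

# Illustrative cohort: usage decays for surviving users before they churn.
sessions = {
    "u1": {0: 10, 1: 6, 2: 3},
    "u2": {0: 8,  1: 5},
    "u3": {0: 4},
}
trend = [sessions_per_active_user(sessions, m) for m in range(3)]
```

A falling trend among users who are still active is the early warning: engagement decays first, the retention curve follows.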

Lever that moves it

Three options, in order: tighten ICP and stop selling to bad-fit segments; introduce a recurring trigger (calendar event, scheduled report, lifecycle email); or add an expansion product the same buyer needs at a different cadence.

Median net revenue retention by SaaS segment (2026):

| Segment | Median NRR |
|---------|------------|
| SMB (<$25K ACV) | 97% |
| Mid-Market ($25K-$100K) | 108% |
| Enterprise (>$100K ACV) | 118% |
| Top quartile (all) | 130% |

Source: SaaS Capital 2026 Benchmarking Metrics.

What is a hockey-stick retention curve?

A hockey-stick retention curve stays flat or slightly down for a long stretch, then bends sharply upward as new behaviour kicks in. It is the engagement equivalent of viral growth, and it is rare. Companies like Facebook, Instagram, and Uber showed this pattern during the Hyper Growth phase described in Sequoia's Evolution of a Product.

Underlying causes

  • A self-reinforcing network effect crossed an inflection threshold.
  • A viral coefficient ticked above one.
  • A new feature unlocked a use case for existing users (Slack adding huddles, Notion adding databases).

Metric to confirm

Magnitude of week-over-week cohort improvement, not absolute retention. Compare cohort N's Week-12 retention to cohort N-12's Week-12 retention. If the delta is widening, you are bending toward a hockey-stick, not just having a good month.
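The cohort-vs-cohort delta is straightforward to compute. A sketch, assuming `week12` maps cohort index to that cohort's Week-12 retention (names and figures are illustrative):

```python
def delta_series(week12, lag=12):
    """Week-12 retention of cohort N minus cohort N-lag, for every N with a comparator."""
    return [
        week12[n] - week12[n - lag]
        for n in sorted(week12)
        if n - lag in week12
    ]

week12 = {0: 0.20, 12: 0.24, 24: 0.32}   # illustrative cohorts, 12 weeks apart
deltas = delta_series(week12)
```

If the deltas themselves are growing, the curve is bending toward a hockey-stick; a single positive delta is just a good quarter.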

Lever that moves it

Identify the loop that produced the inflection (referrals, content, supply density), then make it the company's number-one investment. Hockey-sticks are not sustained by feature breadth; they are sustained by compounding the single loop that caused the bend.

What is a double-hump retention curve?

A double-hump retention curve drops, flattens, drops again, and flattens again at a lower plateau. The shape comes from cyclical use cases: tax software, fitness apps, travel booking, payroll. Users disappear between cycles and return for the next one.

Underlying causes

  • Natural seasonality (annual, quarterly, or weekly cycles).
  • B2B tools used only when a workflow triggers (compliance, hiring, procurement).
  • Multi-product apps where users move between primary and secondary use cases.

Metric to confirm

The days-since-last-active distribution. If you see clusters at 7, 30, or 90 days rather than a smooth decay, the use case is cyclical. A weekly L7 power-user histogram, as described in Andrew Chen's Power User Curve, exposes this clearly.
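Cluster detection in the days-since-last-active distribution can be sketched with a simple bucket count. The anchors, window, and share threshold below are invented for the example:

```python
from collections import Counter

def cycle_peaks(days_since_active, window=2, min_share=0.2):
    """Return the cycle anchors (7, 30, 90 days) where lapsed users cluster."""
    counts = Counter(days_since_active)
    total = len(days_since_active)
    peaks = []
    for anchor in (7, 30, 90):
        near = sum(c for d, c in counts.items() if abs(d - anchor) <= window)
        if near / total >= min_share:
            peaks.append(anchor)
    return peaks

# Illustrative lapsed users: piles near 7 and 30 days rather than a smooth decay.
lapsed = [6, 7, 7, 8, 29, 30, 30, 31, 3, 45]
peaks = cycle_peaks(lapsed)
```

A smooth decay produces no peaks; clusters at weekly or monthly anchors confirm the use case is cyclical, not dying.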

Lever that moves it

Do not fight the cycle. Build calendar-aware re-engagement. Trigger the win-back exactly when the next cycle begins, not on a fixed cadence. Add adjacent jobs-to-be-done that fill the trough between humps so the user has a reason to return mid-cycle.

How do you fix a declining retention curve?

Fixing a declining retention curve is a four-step diagnostic, not a single playbook. The fix depends entirely on where in the curve the decline lives.

  1. Segment by cohort age. Plot Day 1, Day 7, Day 30, Day 90 retention separately. The largest drop tells you which lever to pull.
  2. Segment by acquisition source. Paid social, organic, and referral cohorts almost never share the same shape. The worst-performing source is often the easiest fix (turn it off).
  3. Find the aha moment. What single action correlates with users who survive past the first plateau? Amplitude's retention model guide shows how to isolate the activation event statistically.
  4. Apply the right lever for the shape. Immediate-drop = onboarding rebuild. Slow-bleed = ICP tightening or recurring trigger. Flat-but-low = acquisition mix change. Use the table earlier in this article as the cheat sheet.
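Step 3 can be sketched in code: for each candidate action, compare survival past the first plateau between users who did and did not perform it. The data layout and names are illustrative, and the result is a correlation, not proof of causation.

```python
def aha_lift(users, action):
    """users: list of (actions_done: set, survived_past_plateau: bool)."""
    def rate(flags):
        return sum(flags) / len(flags) if flags else 0.0
    did     = [s for actions, s in users if action in actions]
    did_not = [s for actions, s in users if action not in actions]
    return rate(did) - rate(did_not)

# Illustrative cohort: "upload" separates survivors, "invite" does not.
users = [
    ({"invite", "upload"}, True),
    ({"upload"},           True),
    ({"invite"},           False),
    (set(),                False),
]
```

Rank candidate actions by lift, then validate the top one with a controlled experiment before rebuilding onboarding around it.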

One anti-pattern: chasing average retention as a single number. Reforge and Casey Winters' Retention + Engagement program both stress that average retention hides the cohort and segment behaviour that actually contains the fix.

| Curve shape | Diagnostic signal | Likely root cause | Lever to pull |
|-------------|-------------------|-------------------|---------------|
| Smiling | Curve bends back upward over time | Network effects or lifecycle loops resurrecting users | Amplify the loop driving resurrection |
| Flattening | Curve plateaus at a stable percentage | Product/market fit for a segment | Acquire more look-alikes of the surviving segment |
| Immediate-drop | 60%+ loss in first session or week | Activation failure, slow time-to-value | Rebuild onboarding for speed-to-aha |
| Slow-bleed | Linear decline that never flattens | Weak habit loop or wrong-fit ICP | Tighten ICP or add recurring triggers |
| Hockey-stick | Curve bends sharply upward at an inflection | Compounding network or content loop | Invest in the single loop that caused the bend |
| Double-hump | Two plateaus separated by a dropoff | Cyclical or seasonal use case | Calendar-aware re-engagement at next cycle |