Inbound Lead Scoring
Lead scoring assigns a numerical value to each lead based on how well they match your ICP (fit) and how engaged they are with your content and product (behavior). The score determines when a lead is ready for sales (MQL threshold), how leads are prioritized, and which leads get fast-tracked vs nurtured.
The principle: a lead score is a prediction, not a fact. It predicts the likelihood that a lead will become a customer. Build the model from closed-won data, not from intuition. Calibrate it quarterly against actual outcomes.
The Two-Dimension Model
Every lead score is the sum of two independent dimensions. Track them separately, not as a single blended number.
| Dimension | What it measures | Data source | Changes? |
|---|---|---|---|
| Fit score | How well the lead matches your ICP (firmographic + demographic) | Enrichment data, form responses | Rarely. Firmographics are stable |
| Behavior score | How engaged the lead is with your content and product | Website activity, email engagement, form submissions, product usage | Constantly. Resets over time (decay) |
Combined score = Fit score + Behavior score
Why two dimensions matter:
- High fit + low behavior = good prospect, not ready yet. Nurture
- Low fit + high behavior = engaged but wrong profile. Don't pass to sales
- High fit + high behavior = MQL. Pass to sales immediately
- Low fit + low behavior = ignore
The Fit-Behavior Matrix
| | Low behavior (0-25) | Medium behavior (26-50) | High behavior (51+) |
|---|---|---|---|
| High fit (30+) | Nurture. Good prospect, not engaged yet | Monitor. Getting warm. May MQL soon | MQL. Pass to sales now |
| Medium fit (15-29) | Low priority nurture | Nurture with targeted content | Review. May be worth a call despite imperfect fit |
| Low fit (0-14) | Ignore | Ignore | Do not MQL. Engagement doesn't fix bad fit |
Matrix rules:
- Never MQL a low-fit lead regardless of behavior score. A student downloading every ebook is not a prospect. Fit is a gate, not a tiebreaker
- High-fit leads with zero behavior are still valuable. They just need nurture. Don't discard them because they haven't engaged yet
- The MQL threshold should require BOTH fit ≥ 20 AND behavior ≥ 30 (or your calibrated equivalents). A single blended score hides the difference between "right person, not ready" and "wrong person, very active"
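The matrix reduces to a small routing function. A minimal Python sketch, treating the band boundaries above (fit 15/30, behavior 26/51) as starting values to calibrate, not fixed constants:

```python
def route(fit: int, behavior: int) -> str:
    """Route a lead using the fit-behavior matrix.

    Fit is a gate: below the minimum, no amount of
    engagement produces an MQL.
    """
    if fit < 15:                          # low fit: never MQL
        return "ignore"
    if fit >= 30 and behavior >= 51:
        return "mql"                      # pass to sales now
    if fit >= 30 and behavior >= 26:
        return "monitor"                  # getting warm
    if behavior >= 51:
        return "review"                   # medium fit, high behavior: human call
    return "nurture"
```

Note that `route(5, 90)` and `route(40, 10)` land in different cells even though their blended totals are similar, which is the argument for tracking the two dimensions separately.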
Fit Scoring
Fit scoring evaluates how closely the lead matches your ideal customer profile. It uses firmographic (company) and demographic (person) data, mostly from enrichment.
Fit scoring model
| Category | Criterion | Ideal (+15) | Good (+10) | Acceptable (+5) | Poor (0) | Disqualify (-50) |
|---|---|---|---|---|---|---|
| Company size | Employee count | 50-500 | 20-50 or 500-2000 | 10-20 or 2000-5000 | 5000+ | < 5 (too small) |
| Industry | Vertical | Core ICP vertical | Adjacent vertical | Tangential | Non-ICP | Excluded (gov, edu if not served) |
| Geography | Country/region | Primary market (US) | Secondary (UK, DE, AU) | Tertiary (rest of EU) | Other | Sanctioned countries |
| Seniority | Job level | VP, Director | Manager, Head of | Senior IC | IC, Intern | Student, unemployed |
| Department | Function | Core buyer (Sales, RevOps, Marketing) | Adjacent (Ops, Growth) | Tangential (Product, Eng) | Unrelated (HR, Finance) | N/A |
| Funding stage | Company maturity | Series A-C | Seed or Series D+ | Pre-seed or Public | Bootstrapped (may be fine) | N/A |
| Tech stack | Tools they use | Uses tools you integrate with | Uses competitor tools (switch potential) | No relevant tools | N/A | N/A |
Theoretical maximum fit score: 105 points (seven criteria × 15), though in practice leads rarely max every category. A lead scoring 50+ on fit is an ideal match. 30-49 is a good match. Below 30 needs scrutiny.
Fit scoring rules
- Derive fit from enrichment, not form fields. Don't ask the prospect for employee count, industry, and funding stage on a form. Enrich from their email domain using Clearbit, Apollo, or HubSpot Breeze. Let the form capture name, email, and one qualifying question at most
- Disqualify aggressively. Leads from excluded industries, competitors, students, or personal emails with no company should receive a -50 penalty that makes MQL impossible regardless of behavior
- Fit score doesn't decay. A company's size and industry don't change week to week. Set the fit score once (on enrichment) and update only when enrichment data refreshes
- Review fit criteria quarterly. Analyze closed-won deals from the last 4 quarters. If 40% of wins come from a segment you're scoring as "acceptable," increase that segment's score. The model should reflect reality, not assumptions
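Under these rules, fit scoring is a pure function of enrichment data. A sketch, assuming hypothetical field names (`employee_count`, `seniority`, `industry_tier`, `industry_excluded`) for the enrichment payload; the point bands follow the table above:

```python
def _size_points(n: int) -> int:
    # Bands from the "Company size" row of the fit model.
    if 50 <= n <= 500:
        return 15
    if 20 <= n < 50 or 500 < n <= 2000:
        return 10
    if 10 <= n < 20 or 2000 < n <= 5000:
        return 5
    return 0

def fit_score(lead: dict) -> int:
    """Score firmographic fit from enrichment data.

    Disqualifiers return a hard -50 so no behavior score
    can lift the lead over the MQL gate.
    """
    n = lead.get("employee_count", 0)
    if (n < 5 or lead.get("industry_excluded")
            or lead.get("seniority") in ("student", "unemployed")):
        return -50
    score = _size_points(n)
    score += {"vp": 15, "director": 15, "manager": 10,
              "head_of": 10, "senior_ic": 5}.get(lead.get("seniority"), 0)
    score += {"core": 15, "adjacent": 10,
              "tangential": 5}.get(lead.get("industry_tier"), 0)
    return score
```

Because the function only reads enrichment fields, it can be re-run whenever enrichment data refreshes, which is the only time a fit score should change.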
Behavior Scoring
Behavior scoring measures engagement intensity. Every meaningful interaction adds points. Points decay over time to reflect recency.
Behavior scoring model
| Action | Points | Decay rate | Notes |
|---|---|---|---|
| High-intent actions (instant MQL candidates) | | | |
| Demo request form submission | +50 | None | Instant MQL if fit ≥ 20 |
| Pricing page visit (2+ visits in 7 days) | +30 | -5/week | Multiple visits = active evaluation |
| Free trial signup | +40 | -5/week | Product interest. Route to PQL flow |
| Contact sales form | +45 | None | Direct sales intent |
| Medium-intent actions | | | |
| Case study page view | +10 | -2/week | Evaluating social proof |
| Comparison page view ([You] vs [Competitor]) | +15 | -3/week | Active comparison shopping |
| Webinar attendance (live) | +15 | -2/week | Invested 30-60 minutes |
| Webinar registration (didn't attend) | +5 | -1/week | Interest but no commitment |
| Content download (ebook, report) | +5 | -1/week | Research phase |
| Low-intent actions | | | |
| Blog post visit | +1 | -1/month | Awareness. Don't over-weight |
| Email open | +1 | -1/month | Unreliable (Apple Mail Privacy Protection inflates opens). Minimal weight |
| Email click | +3 | -1/week | More meaningful than open |
| Social media click-through | +2 | -1/week | Light engagement |
| Negative signals | | | |
| Email unsubscribe | -20 | None | Active disengagement |
| Spam complaint | -50 | None | Serious negative signal |
| 30 days no activity | -10 | Applied once | Cooling off |
| 60 days no activity | -20 | Applied once | Going cold |
| Visited careers page | -10 | None | Likely a job seeker, not a buyer |
Behavior scoring rules
- Decay is essential. A lead who downloaded an ebook 6 months ago and has done nothing since is not the same as one who downloaded it yesterday. Decay rates ensure the behavior score reflects recent engagement, not historical
- High-intent actions should be near-instant MQL triggers. A demo request from a lead with fit ≥ 20 should trigger MQL regardless of cumulative behavior score. Don't make a prospect who requests a demo also need 50 points from blog visits
- Cap blog visit and email open points. A lead who reads 50 blog posts and opens every email is a researcher or a competitor, not necessarily a buyer. Cap low-intent action points at 15-20 total
- Negative signals matter. An unsubscribe or a careers page visit is a strong signal of non-buyer intent. Weight them accordingly
- Don't score the same action multiple times in a short window. Visiting the pricing page 10 times in one session is one signal, not 10. Deduplicate actions within a session (30-minute window)
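The dedup and cap rules can be sketched as an event scorer. Action names, point values, and the 20-point low-intent cap are assumptions drawn from the tables above:

```python
SESSION_WINDOW = 30 * 60     # seconds: dedup window per the rule above
LOW_INTENT_CAP = 20          # total cap on blog visits and email opens
POINTS = {"demo_request": 50, "pricing_view": 30, "case_study": 10,
          "email_click": 3, "email_open": 1, "blog_view": 1,
          "careers_view": -10}

def behavior_score(events):
    """events: list of (timestamp_sec, action), sorted by time."""
    last_seen = {}           # action -> timestamp it was last scored
    low_intent_total = 0
    score = 0
    for ts, action in events:
        if action in last_seen and ts - last_seen[action] < SESSION_WINDOW:
            continue         # same action within one session: score once
        last_seen[action] = ts
        pts = POINTS.get(action, 0)
        if action in ("blog_view", "email_open"):
            pts = min(pts, max(0, LOW_INTENT_CAP - low_intent_total))
            low_intent_total += pts
        score += pts
    return score
```

A lead who reads 25 blog posts tops out at the cap, while a single pricing-page visit outweighs all of them, matching the intent hierarchy in the table.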
MQL Threshold
The MQL threshold is the combined score at which a lead is passed to sales. Setting it correctly is the most important calibration decision.
How to set the threshold
- Pull the last 50 closed-won deals. For each, calculate what their fit + behavior score would have been at the time they became an MQL (or at the time of their first sales interaction)
- Find the median score. This is your starting threshold
- Test it. Apply the threshold to the last 3 months of leads. How many would have been MQL'd? What percentage were accepted by sales?
- Calibrate for acceptance rate. Target 30-50% MQL acceptance rate (SDR confirms the lead is worth pursuing). Below 30% = threshold is too low (too many junk MQLs). Above 60% = threshold is too high (missing good leads)
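The steps above amount to a median plus an acceptance-rate check. A sketch, assuming the score lists are pulled from your CRM:

```python
from statistics import median

def starting_threshold(closed_won_scores):
    """Steps 1-2: median combined score of closed-won deals
    at the time they became MQLs."""
    return median(closed_won_scores)

def acceptance_check(accepted, total_mqls):
    """Step 4: is the 30-50% MQL acceptance target met?"""
    rate = accepted / total_mqls
    if rate < 0.30:
        return rate, "raise threshold: too many junk MQLs"
    if rate > 0.60:
        return rate, "lower threshold: likely missing good leads"
    return rate, "in range"
```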
Threshold guidelines
| Fit score | Behavior score | Combined | Classification |
|---|---|---|---|
| ≥ 30 | ≥ 40 (including high-intent action) | ≥ 70 | Instant MQL. Route immediately |
| ≥ 20 | ≥ 30 | ≥ 50 | Standard MQL. Route within SLA |
| ≥ 20 | < 30 | < 50 | Not yet. Nurture until behavior catches up |
| < 20 | Any | Any | Do not MQL regardless of behavior |
Threshold rules
- Fit is a gate. A lead must meet a minimum fit score (≥ 20) to be MQL-eligible regardless of behavior. This prevents high-engagement, low-fit leads from flooding sales
- High-intent actions bypass cumulative scoring. A demo request from a fit lead is an MQL. Period. Don't require additional blog-visit points
- Review threshold quarterly. If MQL acceptance rate drops below 30%, raise the threshold. If pipeline from MQLs drops, lower it. The threshold is a dial, not a constant
- Sales feedback loop is mandatory. SDRs must accept or reject every MQL with a reason. Without this data, you can't calibrate. Build MQL acceptance/rejection into the workflow
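Combining the fit gate with the high-intent bypass, MQL classification is a few lines. A sketch with assumed action names (`demo_request`, `contact_sales`) and the thresholds from the table above:

```python
HIGH_INTENT = {"demo_request", "contact_sales"}

def is_mql(fit: int, behavior: int, recent_actions) -> bool:
    """Fit gates everything; high-intent actions bypass
    cumulative behavior scoring."""
    if fit < 20:
        return False                  # fit gate: never MQL low fit
    if HIGH_INTENT & set(recent_actions):
        return True                   # demo/contact-sales: instant MQL
    return behavior >= 30             # standard cumulative threshold
```

Note the ordering: the fit check runs first, so even a demo request from a student or competitor never reaches sales.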
Score Decay Implementation
Decay models
| Model | How it works | Best for |
|---|---|---|
| Linear decay | Points decrease by a fixed amount per time period (-5 points/week) | Simple. Easy to implement and understand |
| Percentage decay | Points decrease by a percentage per period (20% reduction/month) | Smoother decay. High scores decay faster than low ones |
| Step decay | Points drop at fixed intervals (full value for 30 days, then halved, then zeroed at 90 days) | Event-based engagement (webinars, events) |
| Activity-based reset | Score resets to baseline after N days of no activity | Aggressive. Good for fast sales cycles |
Recommended: Linear decay for most B2B SaaS teams. Simple, predictable, and easy to debug.
Decay rules
- Fit score does NOT decay. Company size and industry don't change
- Behavior score decays on every action type except hard negative signals (unsubscribe, spam complaint)
- High-intent actions decay slower than low-intent actions. A pricing-page visit should hold its value for several weeks; a blog visit's single point should fade within a month
- Set a floor of 0. Scores don't go negative from decay alone (negative signals can still pull the score below 0)
- Run decay calculations daily or weekly, not monthly. Monthly decay creates sudden score cliffs
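A daily linear-decay pass is small enough to show in full. The per-week rates below are the assumed values from the behavior table, converted to per-day; action names are illustrative:

```python
# Per-week decay rates from the behavior scoring table.
WEEKLY_DECAY = {"pricing_view": 5, "case_study": 2, "email_click": 1}

def apply_daily_decay(scores):
    """scores: {lead_id: {action: points}}. Mutates in place.

    Runs daily so scores glide down instead of hitting
    monthly cliffs. Floors at zero: decay alone never
    produces a negative score.
    """
    for actions in scores.values():
        for action, pts in actions.items():
            daily_rate = WEEKLY_DECAY.get(action, 0) / 7
            actions[action] = max(0.0, pts - daily_rate)
    return scores
```

Storing per-action point totals (rather than one blended number) is what makes per-action decay rates possible.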
CRM Implementation
HubSpot
HubSpot has a native "HubSpot Score" property that supports scoring rules with positive and negative criteria.
```text
Settings → Properties → Contact Properties → HubSpot Score

Positive criteria:
  +50: Form submission IS "Demo Request"
  +30: Page views include "/pricing" >= 2 times
  +15: Attended webinar (list membership)
  +10: Page views include "/case-studies"
  +5:  Downloaded content (form submission on content forms)
  +3:  Clicked marketing email

Negative criteria:
  -20: Unsubscribed from emails
  -10: Page views include "/careers"
  -50: Contact property "Competitor" IS true
```
HubSpot scoring limitations:
- HubSpot's native scoring has no built-in time-based decay. Use a workflow that reduces the score by X points every 30 days for contacts with no recent activity
- HubSpot blends fit and behavior into one score. Create two custom score properties (`ICP Fit Score` and `Engagement Score`) via workflows for the two-dimension model. Use the native score as the combined total
- HubSpot's scoring criteria are rule-based, not ML-based. Every rule must be manually configured
Salesforce
Salesforce has no built-in rules-based lead scoring (Einstein Lead Scoring is a separate ML-based add-on). Options:
| Approach | How | Pros | Cons |
|---|---|---|---|
| Marketing automation scoring (Pardot/MCAE, Marketo) | Score in the MA tool, sync to Salesforce | Purpose-built. Handles decay. Rich behavior tracking | Extra tool cost. Sync lag |
| Flow-based scoring | Salesforce Flow calculates score from field values and activities | Native. No extra tool | Complex to build. No behavior tracking without Activity tracking |
| Custom Apex scoring | Custom code calculates score on a schedule | Full control | Requires developer. Maintenance burden |
| Third-party (LeanData, Madkudu) | Predictive scoring tool plugs into Salesforce | ML-based. Often more accurate than rules | Additional cost. Black-box model |
Calibration Process
Quarterly calibration checklist
- Pull MQL acceptance data. What % of MQLs were accepted by SDRs in the last quarter?
  - Target: 30-50%. Below 30% → raise threshold or tighten fit criteria. Above 60% → lower threshold
- Pull MQL-to-Opportunity conversion. What % of accepted MQLs became opportunities?
  - Target: 30-50%. Below 30% → scoring is passing leads that aren't real opportunities. Tighten behavior criteria
- Analyze false negatives. Pull closed-won deals where the contact was never MQL'd. What score did they have? Why didn't they trigger?
  - If common, lower the threshold or add new behavioral triggers
- Analyze false positives. Pull MQLs that were rejected by sales. What scored them high? Which criterion is over-weighted?
  - Common culprits: over-scored blog visits and content downloads
- Check decay effectiveness. Pull leads with high scores but no activity in 60+ days. Are they still showing as MQLs?
  - If yes, decay isn't aggressive enough
- Update fit criteria. Compare closed-won firmographics to your fit scoring model. Adjust weights if your customer profile has shifted
Calibration rules
- Calibrate quarterly, not annually. Markets shift. ICP evolves. A model calibrated in Q1 may be wrong by Q3
- Use actual closed-won data, not sales team opinions. "We think Directors should score higher" is a hypothesis. "68% of closed-won contacts were Directors" is data
- Track calibration changes over time. Log what changed, why, and the impact on MQL volume and acceptance rate per quarter
Measurement
| Metric | Definition | Target | Review |
|---|---|---|---|
| MQL volume | MQLs generated per month | Trending up (or stable) | Monthly |
| MQL acceptance rate | Accepted MQLs / total MQLs | 30-50% | Weekly |
| MQL-to-SQL conversion | SQLs / accepted MQLs | 50-70% | Monthly |
| MQL-to-Opportunity conversion | Opportunities / MQLs | 15-25% | Monthly |
| False positive rate | Rejected MQLs / total MQLs | < 40% | Monthly |
| False negative rate | Closed-won deals that were never MQL'd / total closed-won | < 10% | Quarterly |
| Score distribution | Distribution of scores across all leads | Bell curve centered below MQL threshold | Quarterly |
| Time to MQL | Average days from lead creation to MQL | < 30 days for content leads | Monthly |
| Decay effectiveness | % of leads with high score + no activity in 60 days | < 5% | Monthly |
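The ratio metrics in this table reduce to a handful of counts. A sketch, with counts as hypothetical inputs from your CRM reporting:

```python
def scoring_metrics(total_mqls, accepted, sqls, opps,
                    never_mqld_wins, total_wins):
    """Compute the review ratios from the measurement table.

    Counts are assumed to come from CRM reports over the
    same review window.
    """
    return {
        "acceptance_rate": accepted / total_mqls,
        "mql_to_sql": sqls / accepted,
        "mql_to_opp": opps / total_mqls,
        "false_positive_rate": (total_mqls - accepted) / total_mqls,
        "false_negative_rate": never_mqld_wins / total_wins,
    }
```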
Pre-Launch Checklist
- [ ] Fit scoring criteria defined based on closed-won analysis (not intuition)
- [ ] Behavior scoring actions and point values defined
- [ ] Decay rates set for each behavior action
- [ ] MQL threshold set based on historical data
- [ ] Minimum fit gate defined (leads below fit threshold can never MQL)
- [ ] High-intent instant-MQL triggers defined (demo request, pricing page, contact sales)
- [ ] Negative scoring signals defined (unsubscribe, careers page, competitor)
- [ ] Scoring implemented in CRM or marketing automation
- [ ] MQL acceptance/rejection workflow built (SDR must accept or reject with reason)
- [ ] Calibration cadence set (quarterly) with owner assigned
- [ ] MQL acceptance rate report built and scheduled weekly
- [ ] Sales team trained on what the score means and what to expect from MQL leads
Anti-Pattern Check
- Single blended score with no fit/behavior separation. A score of 60 could mean "perfect fit, barely engaged" or "terrible fit, downloaded everything." These require completely different actions. Track fit and behavior separately
- Blog visits worth 10 points each. A prospect reads 5 blog posts and is halfway to MQL without any buying intent. Cap low-intent actions at 15-20 total points. Don't let content consumption alone trigger MQL
- No score decay. A lead who was active 8 months ago still shows as a high-scoring MQL. Implement decay. Behavior scores should approach zero after 90 days of no activity
- MQL threshold set by gut feel. "50 points sounds right" is not calibration. Pull closed-won data. Find the score those contacts had at the MQL stage. Set the threshold from data
- No negative scoring. A lead who unsubscribes, visits the careers page, or is identified as a competitor still accumulates positive points. Add negative signals to the model
- No sales feedback loop. MQLs go to sales with no acceptance/rejection tracking. Without SDR accept/reject data, you can't measure false positive rate or calibrate the threshold
- Scoring model hasn't been calibrated in 12 months. Your ICP has evolved. Your content has changed. Your traffic patterns have shifted. The model from a year ago is probably wrong. Calibrate quarterly
- High-intent actions not treated as instant MQL triggers. A lead with fit ≥ 20 who requests a demo still needs to accumulate 50 behavior points from blog visits before becoming an MQL. Demo requests should bypass cumulative scoring. Instant MQL if fit passes