
icp-account-modeling

This skill should be used when the user asks to "model ICP accounts", "build an ICP account model", "score account fit", "model accounts against ICP", "build an account scoring model", "rate accounts on ICP fit", "create an account fit model", "quantify account fit", "build a data-driven account model", or any variation of quantitatively scoring and modeling accounts against ICP criteria for B2B SaaS targeting.

ICP Account Modeling

ICP account modeling scores every company in your pipeline, CRM, or target list on how closely it matches your ideal customer profile. The output is a numerical fit score per account that drives prioritization, routing, ABM tiering, and resource allocation. Instead of "this looks like a good account" (subjective), the model produces "this account scores 82/100 on ICP fit" (objective, comparable, actionable).

The principle: the model should predict which accounts are most likely to buy, retain, and expand. Build it from closed-won data, validate against churn data, and update quarterly. A model built from intuition rather than data will drift from reality within 6 months.

The Account Scoring Model

Scoring dimensions

| Dimension | Weight | What it measures | Data source |
|---|---|---|---|
| Company size | 20% | Employee count within ICP range | Enrichment (Apollo, Clearbit) |
| Industry | 20% | Vertical match to ICP verticals | Enrichment |
| Funding stage | 15% | Investment stage alignment | Crunchbase, enrichment |
| Geography | 10% | HQ in serviceable market | Enrichment |
| Tech stack | 15% | Uses tools you integrate with or compete with | Job postings, BuiltWith, enrichment |
| Growth trajectory | 10% | Headcount or revenue growth rate | LinkedIn, enrichment |
| GTM motion | 10% | Sales-led, PLG, or hybrid alignment | Inferred from team composition and hiring |

Scoring rubric

For each dimension, score 0-3:

| Score | Meaning | Criteria example (company size, ICP = 50-500) |
|---|---|---|
| 0 | No fit. Outside ICP entirely | < 10 or > 5,000 employees |
| 1 | Marginal fit. Edge of ICP | 10-49 or 501-1,000 employees |
| 2 | Good fit. Within ICP range | 50-99 or 301-500 employees |
| 3 | Ideal fit. Sweet spot | 100-300 employees (the range where you close fastest and churn least) |

Calculating the weighted score

ICP Score = (size_score × 0.20) + (industry_score × 0.20) +
            (funding_score × 0.15) + (geo_score × 0.10) +
            (tech_score × 0.15) + (growth_score × 0.10) +
            (gtm_score × 0.10)

Normalize to 0-100 scale:
  Raw max = 3.0 (every dimension = 3)
  ICP Score (0-100) = (raw_score / 3.0) × 100
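The weighted calculation and normalization above can be sketched as a small function. Weights come from the scoring-dimensions table; the example account's rubric scores are illustrative:

```python
# Weights from the scoring-dimensions table; they must sum to 1.0.
WEIGHTS = {
    "size": 0.20, "industry": 0.20, "funding": 0.15,
    "geo": 0.10, "tech": 0.15, "growth": 0.10, "gtm": 0.10,
}

def icp_score(scores: dict) -> float:
    """scores: per-dimension rubric scores (0-3). Returns a 0-100 fit score."""
    raw = sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)  # raw max = 3.0
    return round(raw / 3.0 * 100, 1)

# Illustrative account: strong size/industry/geo fit, weaker growth.
account = {"size": 3, "industry": 3, "funding": 2, "geo": 3,
           "tech": 2, "growth": 1, "gtm": 2}
print(icp_score(account))  # → 80.0
```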

Tier assignment

| ICP Score | Tier | Action |
|---|---|---|
| 80-100 | Tier 1 (Ideal) | ABM 1-to-1 or 1-to-few. Highest priority outbound |
| 60-79 | Tier 2 (Good) | Standard outbound. May qualify for 1-to-few ABM |
| 40-59 | Tier 3 (Acceptable) | Lower priority outbound. Volume play |
| 20-39 | Tier 4 (Poor) | Nurture only. Don't spend sales time |
| 0-19 | Not ICP | Disqualify. Remove from outbound lists |
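The thresholds above reduce to a simple lookup; a minimal sketch:

```python
def icp_tier(score: float) -> str:
    """Map a 0-100 ICP score to a tier per the thresholds above."""
    if score >= 80: return "Tier 1 (Ideal)"
    if score >= 60: return "Tier 2 (Good)"
    if score >= 40: return "Tier 3 (Acceptable)"
    if score >= 20: return "Tier 4 (Poor)"
    return "Not ICP"

print(icp_tier(82))  # → Tier 1 (Ideal)
```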

Building the Model from Data

Step 1: Define weights from closed-won analysis

Pull 50+ closed-won deals. For each dimension, calculate the win rate by score level.

Example: Company size analysis

| Size band | Won deals | Lost deals | Win rate | Score assignment |
|---|---|---|---|---|
| 1-20 | 2 | 18 | 10% | 0 |
| 21-50 | 5 | 15 | 25% | 1 |
| 51-200 | 22 | 8 | 73% | 3 |
| 201-500 | 15 | 7 | 68% | 2 |
| 501-1000 | 4 | 12 | 25% | 1 |
| 1000+ | 2 | 20 | 9% | 0 |

The dimension with the largest win-rate spread between best and worst segments gets the highest weight. If company size has a 64pp spread (73% in the best band vs. 9% in the worst) and geography has only a 20pp spread, size is weighted higher.
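The spread calculation can be sketched directly from the deal counts in the table above (the cross-dimension spreads used for weighting are illustrative):

```python
# band -> (won, lost), from the company-size analysis table above
bands = {
    "1-20": (2, 18), "21-50": (5, 15), "51-200": (22, 8),
    "201-500": (15, 7), "501-1000": (4, 12), "1000+": (2, 20),
}
win_rates = {b: won / (won + lost) for b, (won, lost) in bands.items()}
spread = max(win_rates.values()) - min(win_rates.values())
print(f"spread: {spread * 100:.0f}pp")  # → spread: 64pp

# Normalize per-dimension spreads into weights (spreads here illustrative)
spreads = {"size": 0.64, "industry": 0.55, "funding": 0.40, "geo": 0.20}
weights = {d: round(s / sum(spreads.values()), 2) for d, s in spreads.items()}
print(weights)  # → {'size': 0.36, 'industry': 0.31, 'funding': 0.22, 'geo': 0.11}
```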

Step 2: Validate against churn data

Apply the scoring model to existing customers. High-scoring accounts should churn less.

| ICP tier (at time of purchase) | Churn rate (12 months) | NRR |
|---|---|---|
| Tier 1 (80-100) | 4% | 125% |
| Tier 2 (60-79) | 10% | 108% |
| Tier 3 (40-59) | 22% | 95% |
| Tier 4 (< 40) | 40% | 75% |

If Tier 1 accounts don't churn significantly less than Tier 3, the model isn't predicting retention. Adjust weights.

Step 3: Automate scoring in CRM

On account creation or enrichment update:
  → Query enrichment fields (size, industry, funding, geo, tech, growth)
  → Apply scoring rubric per dimension
  → Calculate weighted score (0-100)
  → Assign ICP tier
  → Store: icp_score, icp_tier, icp_last_scored
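The flow above can be sketched end to end. The field names (`icp_score`, `icp_tier`, `icp_last_scored`) come from the flow; the rubric is shown for one dimension only, with thresholds matching the ICP = 50-500 example:

```python
from datetime import date

WEIGHTS = {"size": 0.20, "industry": 0.20, "funding": 0.15, "geo": 0.10,
           "tech": 0.15, "growth": 0.10, "gtm": 0.10}

def score_size(employees):
    """Rubric for one dimension (ICP = 50-500, sweet spot 100-300)."""
    if employees is None:
        return None                        # missing enrichment: don't score
    if 100 <= employees <= 300: return 3   # sweet spot
    if 50 <= employees <= 500: return 2    # within ICP range
    if 10 <= employees <= 1000: return 1   # edge of ICP
    return 0

def update_account(account, dim_scores):
    """Rubric scores -> weighted 0-100 score -> tier -> CRM fields."""
    if any(s is None for s in dim_scores.values()):
        return account                     # enrich first, score second
    score = sum(WEIGHTS[d] * dim_scores[d] for d in WEIGHTS) / 3.0 * 100
    tier = ("Tier 1" if score >= 80 else "Tier 2" if score >= 60 else
            "Tier 3" if score >= 40 else "Tier 4" if score >= 20 else "Not ICP")
    account.update(icp_score=round(score, 1), icp_tier=tier,
                   icp_last_scored=date.today().isoformat())
    return account
```

Note the guard on missing enrichment data: an account with blank fields is left unscored rather than scored wrong.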

Step 4: Quarterly calibration

  • Re-run closed-won analysis with new data
  • Check if Tier 1 still wins at 2x+ Tier 3 rate
  • Check if Tier 1 churn is still less than half the Tier 3 churn rate
  • Adjust weights and score thresholds if needed

Negative Scoring (Anti-ICP)

Some attributes should disqualify an account regardless of positive fit.

| Anti-ICP attribute | Penalty | Why |
|---|---|---|
| Competitor company | -100 (instant disqualify) | Not a real prospect |
| Government / education (if not served) | -50 | Different buying process. Different requirements. Low close rate |
| Company < 5 employees | -40 | Too small to need your product. Can't afford it |
| No relevant department (e.g., no sales team for a sales tool) | -30 | No buyer for your product at this company |
| Sanctioned country | -100 (instant disqualify) | Can't sell |
| Known bad-fit vertical (from churn data) | -30 | Historical evidence of poor fit |
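Negative scoring layers on top of the positive fit score; a sketch using the penalty table above (the flag names are assumptions):

```python
# Penalties from the anti-ICP table; flag names are illustrative.
PENALTIES = {
    "competitor": -100, "gov_edu_unserved": -50, "under_5_employees": -40,
    "no_relevant_department": -30, "sanctioned_country": -100,
    "bad_fit_vertical": -30,
}
DISQUALIFIERS = {"competitor", "sanctioned_country"}

def adjusted_score(fit_score, flags):
    """Apply anti-ICP penalties to a positive 0-100 fit score."""
    if flags & DISQUALIFIERS:
        return 0.0  # instant disqualify, regardless of positive fit
    return max(0.0, fit_score + sum(PENALTIES[f] for f in flags))

# A government agency with a 45/100 positive fit never enters the pipeline:
print(adjusted_score(45.0, {"gov_edu_unserved"}))  # → 0.0
```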

Where the ICP Score Gets Used

| Function | How the score is used |
|---|---|
| List building | Filter for Tier 1-2 accounts only. Don't build lists of Tier 4 accounts |
| Lead routing | Tier 1 → senior AE or ABM. Tier 2 → SDR outbound. Tier 3 → automated nurture |
| Lead scoring | ICP fit score is 50% of total lead score (fit + behavior) |
| ABM tiering | Tier 1 → 1-to-1. Tier 2 → 1-to-few. Tier 3 → 1-to-many |
| Pipeline review | Flag deals below Tier 2 in pipeline review. Question whether sales time is justified |
| Forecasting | Weight pipeline by ICP tier. Tier 1 pipeline is more reliable than Tier 3 |
| Expansion targeting | Score existing customers. High-ICP customers are expansion candidates |

Look-Alike Modeling

Use the ICP model to find new accounts that resemble your best customers.

Process

1. Score your top 20 customers on all ICP dimensions
2. Calculate the average score profile:
   Size: 2.7, Industry: 3.0, Funding: 2.3, Geo: 3.0, Tech: 2.1
3. Search for accounts with similar profiles:
   Sales Nav, Apollo, or Crunchbase filtered to match each dimension
4. Score the results with the ICP model
5. Tier 1-2 matches become your look-alike target list
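Steps 1-2 reduce to averaging per-dimension rubric scores, and step 4 to a tolerance match against that profile. A sketch with illustrative customer data (a real run would use your top 20, not three):

```python
# Illustrative rubric scores (0-3) for three top customers
top_customers = [
    {"size": 3, "industry": 3, "funding": 2, "geo": 3, "tech": 2},
    {"size": 2, "industry": 3, "funding": 3, "geo": 3, "tech": 2},
    {"size": 3, "industry": 3, "funding": 2, "geo": 3, "tech": 3},
]

# Average score profile across the customer set
profile = {d: round(sum(c[d] for c in top_customers) / len(top_customers), 1)
           for d in top_customers[0]}
print(profile)  # → {'size': 2.7, 'industry': 3.0, 'funding': 2.3, 'geo': 3.0, 'tech': 2.3}

def matches(candidate, profile, tolerance=0.5):
    """True if every dimension of a candidate account is near the profile."""
    return all(abs(candidate[d] - profile[d]) <= tolerance for d in profile)

candidate = {"size": 3, "industry": 3, "funding": 2, "geo": 3, "tech": 2}
print(matches(candidate, profile))  # → True
```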

Look-alike rules

  • Look-alikes from closed-won data, not from wishful thinking. Model from your actual best customers, not from "the kind of company we wish would buy"
  • Update the look-alike profile quarterly. As your customer base evolves, the "average best customer" profile changes. A profile from 12 months ago may not represent today's ICP
  • Look-alikes are a starting point, not a guarantee. A company that looks like your best customer still needs a signal (timing) to become a real prospect. Look-alike + signal = qualified target. Look-alike alone = nurture

Measurement

| Metric | Definition | Target | Frequency |
|---|---|---|---|
| Scoring coverage | % of accounts in CRM with an ICP score | > 90% | Monthly |
| Win rate by tier | Close rate for each ICP tier | Tier 1 should be 2x+ Tier 3 | Quarterly |
| Churn rate by tier | Churn rate for each tier | Tier 1 should be < 50% of Tier 3 | Quarterly |
| Pipeline concentration | % of pipeline from Tier 1-2 | > 70% | Monthly |
| Scoring accuracy | Do predictions match outcomes? | Track trend | Quarterly |
| Model drift | Have the best-performing segments shifted since last calibration? | Identify and update | Quarterly |

Anti-Pattern Check

  • Model built from intuition, not data. "I think mid-market SaaS is our ICP" is a hypothesis. "73% of closed-won deals are 51-200 employee SaaS companies" is a model. Use data
  • No negative scoring. A government agency with 3 matching dimensions scores 45/100 and enters the pipeline. Without negative signals, bad-fit accounts pollute the pipeline. Add anti-ICP penalties
  • Scoring but never using the score. The icp_score field exists in CRM but nobody looks at it. The score should drive routing, ABM tiering, lead scoring, and pipeline review. If it's not used, it's wasted
  • Same model for 2 years. Markets shift. Products evolve. The ICP from 2 years ago may not predict today's best customers. Calibrate quarterly
  • Over-weighting one dimension. "Industry is 60% of the score" because all your wins are in SaaS. But if 90% of your prospects are also SaaS, industry doesn't discriminate between fit and non-fit. Weight by predictive power (win-rate spread), not by frequency
  • Scoring without enrichment data. 40% of accounts have blank industry and employee count fields. Their scores are wrong. Enrich first, score second