A PQL scoring model is a SQL-driven function that assigns each user or account a 0-100 score based on three inputs: product engagement (50%), account fit (30%), and intent signals (20%). Accounts above a tuned threshold (typically 70) get routed to AEs as PQLs. Most published guides stop at the concept. This one ships the schema, the SQL, the threshold tuning loop, and a worked example -- 1,000 sign-ups, 47 PQLs, 12 closed-won deals -- so you can copy it into Snowflake or BigQuery on Monday.

What is a PQL (product qualified lead)?

A product qualified lead (PQL) is a free or trial user whose product behavior signals buying intent strongly enough to justify sales engagement. Unlike an MQL, qualification comes from what the user did in the product, not what they filled out on a form.

According to OpenView's PQL benchmarks, PQLs convert to paid customers at 15-30%, roughly 5-10x the rate of MQLs, and they close in a median of 14 days versus 45 days for MQLs. Despite that, only about 1-in-4 SaaS companies have rolled out a PQL strategy.

Pocus frames PQL scoring as three signals stacked together: customer fit, product usage, and buying intent. That trio is the spine of the model in this guide.

If your motion is product-led, this is the highest-leverage scoring you can build. See our product-led growth playbooks for the broader system PQLs sit inside.

What is the difference between a PQL and an MQL?

An MQL is qualified by marketing engagement (content downloads, webinar attendance, form fills). A PQL is qualified by product behavior (activated workspace, invited teammates, hit a usage threshold). The economics are not close.

Metric                  | MQL            | PQL
Conversion to paid      | 1-5%           | 15-30%
Median time to close    | 45 days        | 14 days
Source of qualification | Forms, content | Product events
First-year churn        | Higher         | Lower

Data from OpenView. Accenture research, widely cited across the PLG community, finds PQLs roughly 8x more likely to convert than MQLs.

MQL and PQL are not mutually exclusive. In a hybrid motion (PLG vs sales-led vs hybrid) the same account can carry both scores. The PQL almost always wins the routing fight because the buying signal is more recent and more behavioral.

PQL vs MQL Conversion Rate to Paid Customer

  • PQL (OpenView): 25%
  • PQL (Pocus benchmark): 18%
  • MQL (B2B SaaS avg): 5%
  • MQL (cross-industry avg): 2%

Source: OpenView 2021 Product Benchmarks; Pocus PQL Guide; Understory MQL benchmarks 2026

How do you score product qualified leads?

Score PQLs as a weighted sum of three components, on a 0-100 scale: product engagement (50 points), account fit (30 points), and intent signals (20 points). A user above the threshold (usually 70) becomes a PQL.

The weighting reflects what actually predicts conversion. Product behavior is the strongest signal because the user is voluntarily showing they get value. Account fit is a multiplier: a 250-person SaaS company in your ICP that hits activation is a different lead than a 5-person agency outside ICP doing the same thing. Intent signals are the final tiebreaker.

The four scoring tiers most teams use:

  • 0-49 Cold: nurture only.
  • 50-69 Engaged: marketing motion, in-app prompts.
  • 70-89 PQL: routed to AE queue.
  • 90-100 Hot PQL: immediate outreach, 1-hour SLA.
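The weighted sum and the tier cutoffs above reduce to one small function. A minimal Python sketch (the 50/30/20 component caps and the tier boundaries come straight from the model described here; the function name is ours):

```python
def pql_tier(product_score: int, fit_score: int, intent_score: int) -> tuple[int, str]:
    """Combine the three capped components (50/30/20) and map the total to a tier."""
    score = min(product_score, 50) + min(fit_score, 30) + min(intent_score, 20)
    if score >= 90:
        tier = "Hot PQL"
    elif score >= 70:
        tier = "PQL"
    elif score >= 50:
        tier = "Engaged"
    else:
        tier = "Cold"
    return score, tier

print(pql_tier(45, 25, 10))  # activated, good fit, some intent -> (80, 'PQL')
```

Keeping the cutoffs in one place like this makes the quarterly retune a one-line change.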

ProductLed recommends starting with simple rules ("completed onboarding," "created first project"), shipping the model, then refining as you learn what predicts closed-won. Do not wait for a perfect model. Ship a 1.0 in two weeks and iterate.

What product events should a PQL model include?

Include three categories of events. Each one needs to be validated for correlation with closed-won deals, on at least 100 data points, before it earns a weight.

Activation events (highest weight). The action that proves the user reached your product's core value. Slack famously uses 2,000 messages sent in a workspace. Examples:

  • Workspace or organization created
  • First project published / first dashboard built
  • Integration or API connected
  • Teammate invited (the strongest single predictor in most B2B PLG products)

Depth-of-use events (medium weight). How frequently and broadly the account is using the product:

  • Active days in last 30
  • Seats added
  • Distinct features used
  • API call volume

Intent events (low weight on their own, high weight in combination). Behaviors that look like buying:

  • Pricing page viewed
  • Billing portal opened
  • Upgrade modal clicked
  • Sales-assist chat triggered

Link activation back to your self-serve onboarding patterns -- if your activation event is not the same one your onboarding flow optimizes for, one of them is wrong.
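One way to keep these three categories auditable is a single weights config that the scoring job reads. A hypothetical Python sketch (the event names and point values mirror the examples above; swap in your own tracked events):

```python
# Hypothetical event-weight config; event names and point values are illustrative.
EVENT_WEIGHTS = {
    # Activation events -- highest weight
    "workspace_created":     {"category": "activation", "points": 15},
    "team_member_invited":   {"category": "activation", "points": 15},
    "integration_connected": {"category": "activation", "points": 15},
    # Depth-of-use -- medium weight (scored per active day, capped downstream)
    "daily_active":          {"category": "depth", "points": 1},
    # Intent events -- low weight alone, high in combination
    "pricing_page_viewed":   {"category": "intent", "points": 10},
    "billing_portal_opened": {"category": "intent", "points": 10},
}

def points_for(event_name: str) -> int:
    """Look up an event's point value; untracked events score zero."""
    return EVENT_WEIGHTS.get(event_name, {}).get("points", 0)
```

When an event fails the closed-won correlation check, you delete one dict entry instead of hunting through SQL.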

What does the PQL data warehouse schema look like?

The model needs three tables. Two are inputs you almost certainly already have, one is the output.

1. events (input). Raw product events from Segment, RudderStack, Snowplow, or your own pipeline.

CREATE TABLE events (
  user_id        STRING,
  account_id     STRING,
  event_name     STRING,
  event_time     TIMESTAMP,
  properties     VARIANT  -- semi-structured JSON (Snowflake VARIANT; use JSON in BigQuery)
);

2. accounts (input). Firmographic enrichment from Clearbit, Apollo, or 6sense, joined to your auth/account table.

CREATE TABLE accounts (
  account_id      STRING,
  domain          STRING,
  employee_count  INT,
  industry        STRING,
  country         STRING,
  icp_match       BOOLEAN,
  created_at      TIMESTAMP
);

3. pql_scores (output). The table you sync to your CRM.

CREATE TABLE pql_scores (
  account_id      STRING,
  product_score   INT,
  fit_score       INT,
  intent_score    INT,
  pql_score       INT,
  pql_tier        STRING,  -- Cold / Engaged / PQL / Hot PQL
  scored_at       TIMESTAMP
);

This warehouse-native pattern is what Hightouch's PQL guide and RudderStack both recommend over black-box scoring tools. When the product evolves, you change SQL, not vendor contracts.

What is the actual SQL for a PQL scoring model?

Here is a complete, runnable PQL model. It is written in Snowflake syntax; for BigQuery, swap the interval arithmetic for DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY) and the VARIANT column for JSON. Drop it in, point it at your tables, and schedule it as a daily dbt model.

WITH product_signals AS (
  SELECT
    account_id,
    -- Activation events (max 45 pts)
    MAX(CASE WHEN event_name = 'workspace_created'      THEN 1 ELSE 0 END) * 15 AS activation_pts,
    MAX(CASE WHEN event_name = 'team_member_invited'    THEN 1 ELSE 0 END) * 15 AS invite_pts,
    MAX(CASE WHEN event_name = 'integration_connected'  THEN 1 ELSE 0 END) * 15 AS integration_pts,
    -- Depth of use (max 5 pts)
    LEAST(COUNT(DISTINCT DATE(event_time)), 5)                              AS depth_pts
  FROM events
  WHERE event_time >= CURRENT_DATE - INTERVAL '30 days'
  GROUP BY account_id
),

account_fit AS (
  SELECT
    account_id,
    CASE
      WHEN employee_count BETWEEN 50  AND 500   THEN 15
      WHEN employee_count BETWEEN 501 AND 5000  THEN 10
      WHEN employee_count > 5000                THEN 5
      ELSE 0
    END AS size_pts,
    CASE WHEN industry IN ('SaaS','Fintech','E-commerce') THEN 10 ELSE 3 END AS industry_pts,
    CASE WHEN icp_match THEN 5 ELSE 0 END AS icp_pts
  FROM accounts
),

intent_signals AS (
  SELECT
    account_id,
    MAX(CASE WHEN event_name = 'pricing_page_viewed'   THEN 10 ELSE 0 END) AS pricing_pts,
    MAX(CASE WHEN event_name = 'billing_portal_opened' THEN 10 ELSE 0 END) AS billing_pts
  FROM events
  WHERE event_time >= CURRENT_DATE - INTERVAL '14 days'
  GROUP BY account_id
),

scored AS (
  SELECT
    a.account_id,
    COALESCE(p.activation_pts,0) + COALESCE(p.invite_pts,0)
      + COALESCE(p.integration_pts,0) + COALESCE(p.depth_pts,0) AS product_score,
    COALESCE(f.size_pts,0) + COALESCE(f.industry_pts,0)
      + COALESCE(f.icp_pts,0) AS fit_score,
    COALESCE(i.pricing_pts,0) + COALESCE(i.billing_pts,0) AS intent_score
  FROM accounts a
  LEFT JOIN product_signals p USING (account_id)
  LEFT JOIN account_fit     f USING (account_id)
  LEFT JOIN intent_signals  i USING (account_id)
)

SELECT
  account_id,
  product_score,
  fit_score,
  intent_score,
  product_score + fit_score + intent_score AS pql_score,
  CASE
    WHEN product_score + fit_score + intent_score >= 90 THEN 'Hot PQL'
    WHEN product_score + fit_score + intent_score >= 70 THEN 'PQL'
    WHEN product_score + fit_score + intent_score >= 50 THEN 'Engaged'
    ELSE 'Cold'
  END AS pql_tier,
  CURRENT_TIMESTAMP AS scored_at
FROM scored
ORDER BY pql_score DESC;

Maximum theoretical score is 100 (50 product + 30 fit + 20 intent). Adjust the per-event point values once you have closed-won data to validate weights.
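Before scheduling the model, it is worth re-implementing the arithmetic in a scripting language and unit-testing it against hand-computed accounts. A Python sketch mirroring the CASE logic above (a parity check, not a replacement for the SQL; the event dicts are illustrative):

```python
def score_account(events_30d, events_14d, employee_count, industry, icp_match):
    """Mirror the SQL scoring logic for one account: product + fit + intent."""
    names_30d = {e["name"] for e in events_30d}
    product = (
        15 * ("workspace_created" in names_30d)
        + 15 * ("team_member_invited" in names_30d)
        + 15 * ("integration_connected" in names_30d)
        + min(len({e["day"] for e in events_30d}), 5)  # depth: distinct active days, capped at 5
    )
    if 50 <= employee_count <= 500:
        size = 15
    elif 501 <= employee_count <= 5000:
        size = 10
    elif employee_count > 5000:
        size = 5
    else:
        size = 0
    fit = size + (10 if industry in ("SaaS", "Fintech", "E-commerce") else 3) + (5 if icp_match else 0)
    names_14d = {e["name"] for e in events_14d}
    intent = 10 * ("pricing_page_viewed" in names_14d) + 10 * ("billing_portal_opened" in names_14d)
    return product + fit + intent

# A fully activated, in-ICP account with recent intent should score the maximum.
evts = [{"name": n, "day": i} for i, n in enumerate(
    ["workspace_created", "team_member_invited", "integration_connected",
     "pricing_page_viewed", "billing_portal_opened"])]
print(score_account(evts, evts, 250, "SaaS", True))  # -> 100
```

If this function and the SQL disagree on the same fixture account, one of them has a bug; fix it before the scores reach an AE queue.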

How do you tune PQL score thresholds?

Tune thresholds with closed-won data, not gut feel. The loop has four steps and runs quarterly.

  1. Backfill scores. Pull the last 90-180 days of closed-won deals. Recompute each account's PQL score on the day the deal was created (not today -- you want the score that predicted the deal).
  2. Bucket and plot. Group accounts by 10-point score buckets. Plot conversion rate per bucket. You are looking for the band where conversion rate jumps sharply.
  3. Set the threshold at the inflection. Most teams find the jump between 60-69 and 70-79. Set PQL = 70.
  4. Validate the rate. Per OpenView, PQL rate should land at 5-15% of sign-ups. If yours is over 20%, the threshold is too loose. Under 2%, too tight.
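Steps 1-3 can be scripted once the backfilled scores are in hand. A Python sketch (hedged: `deals` is a hypothetical list of (score_at_deal_creation, closed_won) pairs pulled from your warehouse):

```python
from collections import defaultdict

def conversion_by_bucket(deals, bucket_size=10):
    """Group accounts into 10-point score buckets and compute close rate per bucket."""
    buckets = defaultdict(lambda: [0, 0])  # bucket -> [closed_won, total]
    for score, closed_won in deals:
        b = min(score // bucket_size * bucket_size, 90)  # fold 90-100 into one bucket
        buckets[b][0] += closed_won
        buckets[b][1] += 1
    return {b: won / total for b, (won, total) in sorted(buckets.items())}

# Illustrative backfill: the close rate jumps sharply at the 70-79 bucket.
deals = (
    [(55, 0)] * 18 + [(55, 1)] * 2    # 50-59 bucket: 10% close
    + [(65, 0)] * 17 + [(65, 1)] * 3  # 60-69 bucket: 15%
    + [(75, 0)] * 12 + [(75, 1)] * 8  # 70-79 bucket: 40%  <- inflection, set threshold here
    + [(95, 0)] * 4 + [(95, 1)] * 6   # 90+ bucket: 60%
)
print(conversion_by_bucket(deals))
```

The bucket where the rate jumps (here, 70) is your threshold; the same output feeds step 4's PQL-rate sanity check.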

A practical rule from the Pocus PQL guide: if your PQL-to-close rate is not at least 3x your generic pipeline conversion, the signal you picked is not predictive enough. Drop the lowest-correlated event and add a different one.

Retune every quarter. Buyer behavior shifts, the product ships new features, and old signals decay.

How do you wire PQL scores into HubSpot or Salesforce?

Use reverse ETL to push the pql_scores table from your warehouse into CRM custom fields, then build CRM workflows on those fields. Do not score in the CRM. Score in the warehouse, sync the result.

The stack most PLG teams use:

  • Hightouch or Census for reverse ETL. Schedule a sync every 15-60 minutes.
  • Custom fields in CRM: pql_score (number), pql_tier (picklist), pql_scored_at (datetime).
  • HubSpot: use a workflow that listens to pql_tier = 'Hot PQL' and rotates to AE queue, posts to a #pql-alerts Slack channel, and logs an activity.
  • Salesforce: same pattern using Process Builder or Flow. Add a List View filtered by pql_tier IN ('PQL','Hot PQL') sorted by pql_score DESC -- this is the AE queue.

SLAs that work in practice:

  • Hot PQL (90+): 1-hour first-touch SLA, paged in Slack.
  • PQL (70-89): 24-hour SLA, AE queue.
  • Engaged (50-69): nurture only, no AE involvement.
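The SLA table above reduces to a small routing function that a sync-triggered workflow or webhook consumer can call. A minimal sketch (the action strings are placeholders for your actual CRM workflow names):

```python
def route(pql_tier: str) -> dict:
    """Map a synced pql_tier field to a routing action and first-touch SLA (hours)."""
    routes = {
        "Hot PQL": {"action": "assign_ae_and_page_slack", "sla_hours": 1},
        "PQL":     {"action": "add_to_ae_queue",          "sla_hours": 24},
        "Engaged": {"action": "nurture_only",             "sla_hours": None},
        "Cold":    {"action": "nurture_only",             "sla_hours": None},
    }
    return routes[pql_tier]

print(route("Hot PQL"))  # -> {'action': 'assign_ae_and_page_slack', 'sla_hours': 1}
```

Keeping the routing rules in one function (or one CRM workflow) means tier changes from a quarterly retune propagate without touching the scoring SQL.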

This is the same routing pattern documented in our RevOps automation playbooks, applied to product signals instead of form fills.

What does the PQL math look like in practice (1,000 sign-ups -> 12 closed-won)?

Here is anonymized data from a Q1 2026 PLG cohort using the model above. Inputs: a $40K ACV B2B SaaS product with a 14-day free trial.

Stage                            | Count | Conversion
Sign-ups (month)                 | 1,000 | --
Activated (workspace + 1 invite) | 312   | 31.2%
Engaged (score 50-69)            | 118   | 11.8%
PQL (score 70-89)                | 35    | 3.5%
Hot PQL (score 90+)              | 12    | 1.2%
Total PQLs (70+)                 | 47    | 4.7%
Closed-won                       | 12    | 25.5% of PQLs, 1.2% of sign-ups

The 47 PQLs produced 12 closed-won deals at $40K ACV: $480K in pipeline-to-revenue from a single month's free-trial cohort, with AEs touching only the top 4.7% of sign-ups.
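The cohort math checks out with a few lines of arithmetic (all numbers from the table above):

```python
signups, pqls, closed_won, acv = 1_000, 47, 12, 40_000

pql_rate = pqls / signups        # share of sign-ups that became PQLs
close_rate = closed_won / pqls   # PQL-to-close rate
revenue = closed_won * acv       # closed-won revenue from the cohort

print(f"PQL rate: {pql_rate:.1%}, close rate: {close_rate:.1%}, revenue: ${revenue:,}")
# PQL rate: 4.7%, close rate: 25.5%, revenue: $480,000
```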

Notable: the PQL-to-close rate of 25.5% lands inside OpenView's 15-30% benchmark, and the PQL rate of 4.7% sits just below the 5-15% range. After threshold tuning the next quarter (lowered from 75 to 70, added a seats_invited >= 3 event), PQL count rose to 71 and close rate held at 23%. That is a 50% increase in pipeline volume with marginal close-rate decay -- the right tradeoff for an AE team that is hiring.

PQL Funnel: 1,000 Sign-ups -> 47 PQLs -> 12 Closed-Won

  • Sign-ups: 1,000
  • Activated: 312
  • Engaged (50-69): 118
  • PQL (70-89): 35
  • Hot PQL (90+): 12
  • Closed-won: 12

Source: Anonymized Q1 2026 PLG cohort, Growth Engineer client data
Score band | Tier    | Treatment                                                 | Expected close rate
0-49       | Cold    | No outreach. Nurture via lifecycle email.                 | <1%
50-69      | Engaged | Marketing nurture. Trigger in-app upgrade prompts.        | 3-5%
70-89      | PQL     | Routed to AE queue in Salesforce/HubSpot. SLA: 24 hours.  | 15-25%
90-100     | Hot PQL | Immediate AE assignment + Slack alert. SLA: 1 hour.       | 30-40%