
aeo-content-audit

This skill should be used when the user asks to "audit content for AI search", "check if pages rank in ChatGPT", "run an AEO audit", "evaluate content for AI engines", "audit pages for Perplexity", "check AI search readiness", "assess content for answer engines", "review pages for AI citations", or any variation of auditing existing website content for answer engine optimization readiness.

AEO Content Audit

An AEO content audit evaluates how well your existing pages are structured for AI search engines to extract, cite, and recommend. It is not an SEO audit. SEO audits check rankings, backlinks, and technical health. AEO audits check extractability — can an AI engine read your page and lift a clean, accurate, citation-worthy answer?

Run an AEO audit before creating new content. Most sites have 50-100 existing pages that could rank in AI search with structural changes. Fixing existing pages is faster and higher-ROI than publishing new ones.

Audit Scope

What to audit

Prioritize pages by AI search potential, not by organic traffic.

| Priority | Page type | Why |
| --- | --- | --- |
| 1 | Comparison pages (vs, alternatives) | Highest-volume AI queries for SaaS |
| 2 | Category / definition pages | AI engines need clean category definitions |
| 3 | How-to guides and tutorials | Step-by-step content is highly extractable |
| 4 | Pricing pages | "How much does X cost?" is a top AI query |
| 5 | Feature / product pages | Build entity authority |
| 6 | Blog posts (top 20 by traffic) | May already rank; optimize for extraction |

Skip: Press releases, event pages, company news, career pages. AI engines rarely cite these.

Audit batch size

  • First audit: top 20 pages by priority
  • Ongoing: audit 10 pages/month on a rolling schedule
  • Trigger re-audit: when AI search tools show citation drops

The AEO Audit Scorecard

Score each page on 10 criteria. Each criterion is 0-2 points. Maximum score: 20.

| # | Criterion | 0 (Fail) | 1 (Partial) | 2 (Pass) |
| --- | --- | --- | --- | --- |
| 1 | Answer-first | Answer buried below fold or absent | Answer present but after 100+ words | Direct answer in first 50 words |
| 2 | Question-shaped H2s | Generic or clever H2s | Some H2s match queries | All H2s match real buyer queries |
| 3 | Declarative language | Hedged, passive, vague | Mixed hedged and declarative | Confident, specific, extractable |
| 4 | Tables | No tables | 1 table, but comparison data in prose elsewhere | All comparison data in tables |
| 5 | Structured data | No schema markup | Basic Article only | Page-type-specific schema (FAQPage, HowTo, Product) |
| 6 | Author + date | No author, no date | Author OR date present | Author + datePublished + dateModified all present |
| 7 | Recency | Last updated 12+ months ago | Updated 6-12 months ago | Updated within 6 months |
| 8 | Entity consistency | Brand name varies across page | Mostly consistent | Perfectly consistent brand name |
| 9 | Content depth | Thin or generic (< 500 words, no original insight) | Adequate coverage | Comprehensive, with proprietary data or unique POV |
| 10 | Accessibility | Content gated, JS-rendered only, or image-based | Partially accessible | Fully crawlable HTML, no gates, no JS dependency |

Score interpretation

| Score | Rating | Action |
| --- | --- | --- |
| 16-20 | AEO-ready | Monitor citations; minor tweaks only |
| 11-15 | Fixable | Prioritize structural fixes; 2-4 hours of work per page |
| 6-10 | Major rework | Likely needs a rewrite with AEO-first structure |
| 0-5 | Not AEO-viable | Rebuild from scratch or deprioritize |
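The scorecard and interpretation bands above can be sketched as a small scoring helper. This is a minimal illustration, not a prescribed tool; the criterion keys are shorthand for the ten criteria in the scorecard table.

```python
# Sketch of the 20-point AEO scorecard: 10 criteria, 0-2 points each.
CRITERIA = [
    "answer_first", "question_shaped_h2s", "declarative_language",
    "tables", "structured_data", "author_and_date", "recency",
    "entity_consistency", "content_depth", "accessibility",
]

def score_page(scores: dict[str, int]) -> tuple[int, str]:
    """Sum per-criterion scores and map the total to a rating band."""
    for name, value in scores.items():
        if value not in (0, 1, 2):
            raise ValueError(f"{name}: score must be 0, 1, or 2, got {value}")
    total = sum(scores.get(name, 0) for name in CRITERIA)
    if total >= 16:
        rating = "AEO-ready"
    elif total >= 11:
        rating = "Fixable"
    elif total >= 6:
        rating = "Major rework"
    else:
        rating = "Not AEO-viable"
    return total, rating
```

For example, a page scoring 1 on every criterion totals 10 and lands in "Major rework" — one reason to be honest about partial credit.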

Audit Process

Step 1: Build the target query list

Before auditing pages, define what AI queries each page should answer.

Per page, identify:

  • Primary query: the single most important question this page answers
  • Secondary queries: 2-3 related questions the page should also address
  • AI phrasing: how a user would ask this in ChatGPT vs Google (usually longer, more conversational)

| Page | Primary query | AI phrasing |
| --- | --- | --- |
| /vs/notion-vs-confluence | "Notion vs Confluence" | "What's the difference between Notion and Confluence? Which is better for a startup?" |
| /what-is/revenue-intelligence | "What is revenue intelligence?" | "What is revenue intelligence and how does it work?" |
| /pricing | "[Product] pricing" | "How much does [Product] cost per month?" |

Step 2: Test in AI engines

For each page's primary query, test in all three major AI engines:

| Engine | URL | What to record |
| --- | --- | --- |
| ChatGPT | chatgpt.com | Cited? Accurate? Who else is cited? |
| Perplexity | perplexity.ai | Cited? Accurate? Source ranking? |
| Gemini | gemini.google.com | Cited? Accurate? How does the answer differ? |

Record per query:

  • Were you cited? (Yes / No / Partially)
  • Was the answer accurate? (Yes / No / Inaccurate details)
  • Which competitors were cited instead?
  • What did the cited source do that yours doesn't?
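One way to keep these observations comparable across engines and audit cycles is a small record per (page, engine, query). A minimal sketch; the class and field names are illustrative, not part of any required tooling.

```python
from dataclasses import dataclass, field

@dataclass
class CitationTest:
    """One test of a page's primary query in one AI engine."""
    page_url: str
    engine: str                # "ChatGPT", "Perplexity", or "Gemini"
    query: str
    cited: str                 # "Yes", "No", or "Partially"
    accurate: str              # "Yes", "No", or "Inaccurate details"
    competitors_cited: list[str] = field(default_factory=list)
    source_notes: str = ""     # what the cited source did that yours doesn't
```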

Step 3: Score each page

Apply the scorecard criteria. Be honest — a 7 is a 7, not "almost a 10 with some fixes."

Step 4: Diagnose patterns

After scoring all pages, look for systemic issues:

| Common pattern | Typical cause | Fix |
| --- | --- | --- |
| Low scores on "answer-first" across all pages | Editorial style prioritizes narrative over directness | Create an AEO writing guide and retrain writers |
| No structured data on any page | Schema was never part of the publishing workflow | Add schema to the publishing checklist, batch-add to existing pages |
| All pages fail recency | No content refresh process | Build a quarterly refresh cycle |
| Tables missing everywhere | Writers default to prose | Add "use tables for comparisons" to the content brief template |
| No authors on pages | Company-branded content only | Add real author bylines to all content |

Step 5: Prioritize fixes

Use the impact-effort matrix:

| Fix type | Impact | Effort | Do when? |
| --- | --- | --- | --- |
| Add answer to first 50 words | High | Low (15 min/page) | Immediately; batch all pages in a day |
| Add structured data | High | Low (30 min/page) | Week 1 |
| Rewrite H2s to question shape | Medium | Low (20 min/page) | Week 1-2 |
| Add tables for comparisons | High | Medium (1 hr/page) | Week 2-3 |
| Add author + dates | Medium | Low (10 min/page) | Week 1 |
| Full content rewrite | High | High (4-8 hrs/page) | Prioritize top 5 pages only |
| Add original data / unique POV | Very high | High (varies) | Ongoing; build into the editorial process |
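The "add structured data" and "add author + dates" fixes usually mean emitting a JSON-LD block into the page head. A minimal sketch using Python's json module; the headline, author, and dates below are placeholders, not values from a real page.

```python
import json

# Illustrative Article JSON-LD covering the author + datePublished +
# dateModified scorecard criteria. All field values are placeholders.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Notion vs Confluence: Which Is Better for a Startup?",
    "author": {"@type": "Person", "name": "Jane Example"},
    "datePublished": "2024-03-01",
    "dateModified": "2025-01-15",
}

json_ld = json.dumps(article_schema, indent=2)
# Embed in the page as: <script type="application/ld+json">{json_ld}</script>
```

Comparison, FAQ, and pricing pages would swap in the page-type-specific schema named in the scorecard (FAQPage, HowTo, Product) rather than a generic Article.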

Audit Output Template

Deliver the audit as a spreadsheet or table with these columns:

| Page URL | Primary query | Current citation? | Score (/20) | Top issue | Fix | Priority | Est. time |
| --- | --- | --- | --- | --- | --- | --- | --- |
| /vs/x-vs-y | "X vs Y" | Not cited | 8 | No answer in first 50 words, no tables | Rewrite intro, add comparison table | P1 | 2 hrs |
| /pricing | "X pricing" | Cited (inaccurate) | 12 | Outdated pricing, no schema | Update prices, add Product schema | P1 | 1 hr |
| /blog/how-to-z | "How to Z" | Cited | 16 | Missing dateModified | Add dateModified schema | P3 | 15 min |
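If the audit is delivered as a spreadsheet, the template above maps directly to a CSV. A minimal sketch with Python's csv module, assuming the same columns; the sample row is taken from the template.

```python
import csv
import io

# Columns mirror the audit output template.
COLUMNS = ["page_url", "primary_query", "current_citation", "score",
           "top_issue", "fix", "priority", "est_time"]

rows = [
    ["/vs/x-vs-y", "X vs Y", "Not cited", 8,
     "No answer in first 50 words, no tables",
     "Rewrite intro, add comparison table", "P1", "2 hrs"],
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(COLUMNS)
writer.writerows(rows)
audit_csv = buffer.getvalue()
```

The csv module handles quoting automatically, which matters here because issue descriptions often contain commas.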

Ongoing Monitoring

An AEO audit is not a one-time event. Build a monitoring cycle.

| Frequency | Activity |
| --- | --- |
| Weekly | Spot-check 3-5 priority queries in AI engines |
| Monthly | Re-test the full query list (20-50 queries) across all engines |
| Quarterly | Re-score the top 20 pages with the full scorecard |
| On content publish | Run the AEO checklist before every new page goes live |
| On product change | Update any page referencing changed features, pricing, or integrations |

Pre-Audit Checklist

Before starting an AEO audit:

  • [ ] Target query list built (20-50 queries minimum)
  • [ ] AI search testing accounts set up (ChatGPT, Perplexity, Gemini)
  • [ ] Page priority list defined by page type
  • [ ] AEO scorecard template ready
  • [ ] Access to CMS for implementing fixes
  • [ ] Access to schema markup tools or templates
  • [ ] AI search monitoring tool set up (Profound, Otterly, or manual tracking sheet)
  • [ ] Stakeholder buy-in for content updates (some pages may need rewriting)
  • [ ] Baseline citations recorded before making changes

Anti-Pattern Check

  • Auditing only blog posts → Blog posts are rarely the highest-value AEO pages. Start with comparison pages, definitions, and pricing. These get cited 3-5x more often
  • Scoring pages without testing in actual AI engines → The scorecard predicts AEO readiness. The real test is asking the query in ChatGPT and seeing if you're cited. Always do both
  • Fixing everything at once → Batch the quick wins (answer-first rewrites, schema markup, date additions) in week 1. Save full rewrites for the top 5 highest-impact pages
  • Auditing once and never again → AI engine behavior changes. Sources get displaced. Build a monthly monitoring cycle or the audit becomes stale
  • Only checking ChatGPT → Perplexity, Gemini, and Claude cite different sources. A page cited in ChatGPT may not be cited in Perplexity. Test all three
  • Ignoring what competitors are doing right → When a competitor is cited and you're not, read their page. Note what they did structurally that you didn't. Copy the structure, not the content