
pseo-ai-content-enrichment

This skill should be used when the user asks to "enrich pSEO with AI", "use AI for programmatic content", "AI content for pSEO pages", "enrich programmatic pages with AI", "generate pSEO content with AI", "AI-assisted pSEO", "add AI content to programmatic pages", "scale content enrichment with AI", or any variation of using AI to enrich, generate, or improve content for programmatic SEO pages at scale.

pSEO AI Content Enrichment

AI content enrichment is the process of using LLMs to add unique, valuable content to programmatic SEO pages at scale. Raw data produces thin pages. AI enrichment transforms structured data into pages with unique descriptions, per-entry analysis, specific FAQ sections, and contextual content — without the cost of manually writing 500 pages.

The key distinction: AI is the production tool, not the data source. The data must come from verified sources. AI transforms that data into readable, structured content.

The AI Enrichment Pipeline

| Stage | Input | AI action | Output | Human check |
| --- | --- | --- | --- | --- |
| 1. Description generation | Entity name + data fields | Generate a 100-200 word unique description per entry | Per-entry description | 20% sample review |
| 2. Comparison context | Entity data + competitor data | Generate a "how it compares" section per entry | Per-entry comparison paragraph | 20% sample review |
| 3. Pros/cons generation | Entity features + reviews | Generate 3-5 pros and 2-3 cons per entry | Per-entry pros/cons list | 20% sample review |
| 4. FAQ generation | Entity data + PAA research | Generate 3-5 entry-specific FAQ Q&A pairs | Per-entry FAQ section | 20% sample review |
| 5. Expert summary | All generated content + data | Generate a 2-3 sentence "our take" per entry | Per-entry expert callout | 50% review (higher stakes) |
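The staged pipeline can be sketched as a simple loop over stage definitions. Everything here is illustrative: the stage names, the `review_rate` fields, and the `call_llm` placeholder stand in for whatever model client and orchestration you actually use.

```python
# Minimal sketch of the five-stage enrichment pipeline. The `call_llm`
# function is a placeholder assumption, not a specific provider API.

def call_llm(prompt: str) -> str:
    # Placeholder: in a real pipeline this calls your LLM provider.
    return f"[generated from a {len(prompt)}-char prompt]"

STAGES = [
    {"name": "description", "review_rate": 0.20},
    {"name": "comparison", "review_rate": 0.20},
    {"name": "pros_cons", "review_rate": 0.20},
    {"name": "faq", "review_rate": 0.20},
    {"name": "expert_summary", "review_rate": 0.50},  # higher stakes: 50% review
]

def enrich_entry(entry: dict) -> dict:
    """Run every stage for one entity; returns outputs keyed by stage name."""
    outputs = {}
    for stage in STAGES:
        # Each stage would use its own master prompt in practice.
        prompt = f"Write the {stage['name']} for {entry['name']} using: {entry['data']}"
        outputs[stage["name"]] = call_llm(prompt)
    return outputs

result = enrich_entry({"name": "Acme CRM", "data": "pricing $49/mo; integrates Slack"})
```

Keeping the per-stage review rate next to each stage definition makes the higher-stakes 50% review for expert summaries explicit in code rather than tribal knowledge.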

Prompt Engineering for pSEO

The master prompt structure

Every AI enrichment task needs a master prompt that produces consistent, high-quality output across hundreds of entries.

Prompt template:

```
You are writing content for a [page type] about [entity type].

Context:
- Entity name: {name}
- Category: {category}
- Key data: {data_field_1}, {data_field_2}, {data_field_3}

Task: Write a [content element] for this specific entity.

Rules:
- [Word count constraint]
- [Specificity requirement: must reference the entity's specific data]
- [Banned phrases list]
- [AEO formatting requirement]
- [Uniqueness requirement: must differ from generic category description]

Output format:
[Exact format specification]
```
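At scale, the master prompt is filled programmatically from each entry's data record. A minimal sketch, assuming the placeholder names above and an invented example entity ("Acme CRM" and its data are illustrative, not real product facts):

```python
# Sketch: filling a master prompt from an entity's data record.
# The template fields and the example entry are illustrative assumptions.

MASTER_PROMPT = """You are writing content for a {page_type} about {entity_type}.

Context:
- Entity name: {name}
- Category: {category}
- Key data: {price}, {features}, {integrations}

Task: Write a {content_element} for this specific entity.

Rules:
- Write exactly 100-150 words.
- Reference at least one value from the key data above.
- Never use: "comprehensive", "cutting-edge", "powerful", "seamless".
"""

entry = {
    "page_type": "comparison page",
    "entity_type": "CRM tools",
    "name": "Acme CRM",            # hypothetical entity
    "category": "CRM",
    "price": "$49/month",
    "features": "pipeline tracking, email sync",
    "integrations": "Salesforce, Slack",
    "content_element": "description",
}

prompt = MASTER_PROMPT.format(**entry)
```

Because every placeholder maps to a verified data field, a missing field raises a `KeyError` at generation time instead of silently producing a generic prompt.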

Prompt rules

| Rule | Why | Example |
| --- | --- | --- |
| Include entity-specific data in the prompt | Forces AI to use real data, not generic text | "Pricing: $49/month. Integrations: Salesforce, HubSpot, Slack." |
| Set word count constraints | Prevents verbosity and ensures consistency | "Write exactly 100-150 words." |
| Ban generic phrases | Prevents AI from producing interchangeable descriptions | "Never use: 'comprehensive solution', 'powerful platform', 'cutting-edge'" |
| Require specific comparisons | Forces differentiation | "Compare to [top 2 competitors by name] on at least one dimension" |
| Specify the output format | Ensures consistent structure | "Output as: 3-5 bullet points, each starting with a bold label" |
| Include audience context | Focuses the content | "Written for a B2B SaaS buyer evaluating CRM tools for a 50-person team" |

Example prompts by content element

Per-entry description:

```
Write a 100-150 word description of {tool_name} for a B2B buyer evaluating {category} tools.

Data: Pricing starts at {price}. Key features: {features}. Best for: {best_for}. Integrations: {integrations}.

Rules:
- First sentence must define what {tool_name} does in under 25 words
- Include at least one specific number (pricing, user count, or metric)
- Compare to one named competitor on one dimension
- Never use: "comprehensive", "cutting-edge", "powerful", "seamless"
- Write in present tense, second person
```

Per-entry FAQ:

```
Generate 4 FAQ questions and answers about {tool_name} specifically.

Data: {all_data_fields}

Rules:
- Questions must be specific to {tool_name}, not generic category questions
- At least one question about pricing, one about integrations, one about limitations
- Each answer: 2-3 sentences, includes a specific fact from the data
- Never answer with "Contact sales" or "It depends"
```

Quality Control at Scale

The 20% review rule

Have a human review at least 20% of AI-generated content before publishing any batch.

| Review focus | What to check | Fail criteria |
| --- | --- | --- |
| Accuracy | Do the facts match the source data? | Any factual error = fail |
| Specificity | Is the content specific to this entry or generic? | Could apply to 5+ other entries unchanged = fail |
| Uniqueness | Is this content different from other pages? | > 70% similar to another generated page = fail |
| Readability | Does it read naturally? | Obvious AI patterns (hedge words, filler, em-dashes) = edit |
| AEO compliance | Answer in first sentence? Extractable claims? | No direct answer = edit |
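The mechanical parts of this review (word count, banned phrases) can run automatically before any human sees the batch. A minimal sketch; the word-count bounds and banned list are example values, not a canonical set:

```python
# Automated pre-review gate: reject entries that fail mechanical checks
# before they reach the 20% human sample. Thresholds are illustrative.

BANNED = {"comprehensive", "cutting-edge", "powerful", "seamless"}

def auto_check(text: str, min_words: int = 100, max_words: int = 150) -> list[str]:
    """Return a list of failure reasons; an empty list means the entry passes."""
    failures = []
    words = text.split()
    if not (min_words <= len(words) <= max_words):
        failures.append(f"word count {len(words)} outside {min_words}-{max_words}")
    lowered = text.lower()
    for phrase in sorted(BANNED):
        if phrase in lowered:
            failures.append(f"banned phrase: {phrase}")
    return failures
```

Returning the reasons, rather than a bare pass/fail, lets the batch workflow log exactly why an entry was auto-rejected, which is what you need to fix the prompt.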

Batch quality workflow

| Step | Action | Decision |
| --- | --- | --- |
| 1 | Generate content for the entire batch (100-500 entries) | |
| 2 | Automated checks: word count, uniqueness, banned phrases | Auto-reject failures |
| 3 | Human review of a 20% random sample | If > 10% of sample fails → fix prompt, regenerate entire batch |
| 4 | Fix individual failures | |
| 5 | Publish in staggered batches | |
| 6 | Monitor indexation and quality signals post-publish | If indexation < 80% → diagnose and fix |
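The sample-and-decide step can be sketched as a small function. The `passed_review` field, sample rate, and 10% threshold follow the workflow above; everything else (field names, seed handling) is an illustrative assumption:

```python
import random

def review_batch(entries: list[dict], sample_rate: float = 0.20,
                 regen_threshold: float = 0.10, seed: int = 0) -> str:
    """Sample entries for human review and decide the batch's fate.

    Each entry dict is assumed to carry a `passed_review` flag set by the
    human reviewer (illustrative field name).
    """
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    k = max(1, round(len(entries) * sample_rate))
    sample = rng.sample(entries, k)
    fail_rate = sum(not e["passed_review"] for e in sample) / k
    if fail_rate > regen_threshold:
        return "fix prompt and regenerate entire batch"
    return "fix individual failures and publish in staggered batches"
```

The key design point is that a high sample fail rate triggers a prompt fix and full regeneration, not one-off patching: a systemic prompt problem produces systemic failures.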

What AI Should NOT Do in pSEO

| AI should NOT | Why | What to do instead |
| --- | --- | --- |
| Invent data (pricing, features, stats) | Hallucinated facts damage credibility | Feed verified data to the prompt; AI formats it, it doesn't create it |
| Write the entire page from scratch | Produces generic content without unique data | AI enriches a data-driven template; it doesn't replace the data |
| Generate content without entity-specific data | Output will be generic and interchangeable | Always include entity-specific data fields in every prompt |
| Self-assess quality | AI can't reliably judge its own output quality | Human review is non-negotiable |
| Replace expert judgment | AI can't provide genuine expert opinions | Label AI-generated takes as an overview, not expert opinion |
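One cheap automated guard against invented data: flag any numeric value in the output that never appeared in the source data fed to the prompt. This is a rough sketch with a simplistic regex, not a complete hallucination detector:

```python
import re

# Matches currency amounts and plain numbers, e.g. "$49", "150,000", "4.5".
NUMBER_RE = re.compile(r"\$?\d[\d,.]*")

def find_unsourced_numbers(generated: str, source_data: str) -> list[str]:
    """Flag numbers in the generated text that never appear in the source
    data -- a cheap signal for hallucinated pricing or stats."""
    allowed = set(NUMBER_RE.findall(source_data))
    return [n for n in NUMBER_RE.findall(generated) if n not in allowed]
```

Any hit goes straight to human review: either the model invented a figure, or it reformatted a real one ("$20" vs "20 dollars"), and both are worth catching before publish.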

Pre-Enrichment Checklist

  • [ ] Verified data source built with all required fields per entry
  • [ ] Master prompts written and tested on 10 sample entries
  • [ ] Prompt includes entity-specific data (not just name)
  • [ ] Banned phrase list defined and included in prompts
  • [ ] Word count constraints set for each content element
  • [ ] AEO formatting requirements included in prompts
  • [ ] Automated quality checks built (word count, similarity, banned phrases)
  • [ ] 20% human review process defined with fail criteria
  • [ ] Batch regeneration process ready (if sample fails > 10%)
  • [ ] Post-publish monitoring plan set (indexation, quality signals)

Anti-Pattern Check

  • Prompting AI with just the entity name → "Write about HubSpot" produces generic content. "Write about HubSpot given: pricing $20/seat, 150K customers, integrates with Salesforce/Slack/Gmail, best for SMBs" produces specific, useful content. Always include data
  • No banned phrase list → AI defaults to "comprehensive", "powerful", "cutting-edge", "seamless" across every page. Ban these and 20+ similar words to force specificity
  • Skipping human review → AI hallucinates facts, produces near-duplicates, and occasionally generates nonsense. The 20% review catches systemic issues before 500 bad pages go live
  • Using the same prompt for every content element → Description prompts, FAQ prompts, and comparison prompts need different structures. One master prompt for everything produces inconsistent quality
  • AI generates data it wasn't given → If the prompt doesn't include pricing and the AI outputs "$49/month" — that's a hallucination. Instruct AI to only use data provided in the prompt. Never invent facts
  • No similarity check across generated content → AI may produce very similar descriptions for similar entities. Run pairwise similarity checks. If two pages are > 70% similar, regenerate one with a differentiation-focused prompt
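The pairwise check from the last point can be sketched with the standard library. `SequenceMatcher` is a simple stand-in for illustration; at real pSEO scale (thousands of pages) you would swap in shingling or MinHash rather than an O(n²) comparison:

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(pages: dict[str, str], threshold: float = 0.70) -> list[tuple]:
    """Return pairs of page ids whose generated text exceeds the similarity
    threshold (0.70 matches the 70% fail criterion used in review)."""
    flagged = []
    for (id_a, text_a), (id_b, text_b) in combinations(pages.items(), 2):
        if SequenceMatcher(None, text_a, text_b).ratio() > threshold:
            flagged.append((id_a, id_b))
    return flagged
```

Each flagged pair keeps one page as-is and regenerates the other with a differentiation-focused prompt, as described above.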