A mid-market SaaS brand with multiple products, integrations, and regional sites was losing AI citations and rich results after a redesign.
Entities were duplicated, IDs changed, and bios went stale.
We rebuilt the entity graph, standardized schema, and set up measurement that tied fixes to revenue.
In this case study you'll see what we did, the stack we used, and the results—plus a reusable checklist.
Use this alongside our entity pillar, Entity Optimization: The Complete Guide & Playbook, and our structured data pillar, Structured Data: The Complete Guide for SEO & AI.
The problem
Duplicate @id values for products and authors across EN/PT/FR.
Integration pages lacked consistent names; assistants cited competitors instead.
Product prices and availability mismatched between schema and page.
Author bios outdated; reviewer schema missing on sensitive content.
Internal links broke after redesign; pillars and supports were orphaned.
AI Overviews stopped citing the brand for core “AI search optimization” queries.
Objectives
Restore entity clarity for brand, products, integrations, authors, and locations.
Regain rich results and AI citations; improve CTR and demo requests.
Build governance to prevent ID drift and schema breakage in future releases.
Approach overview
Audit entities, IDs, schema coverage, and AI citations.
Rebuild the entity map and sameAs policy.
Update templates and content with stable schema and answer-first intros.
Rewire internal links to match the entity graph.
Monitor AI citations, coverage, and performance with dashboards.
Step 1: audit
Crawled 5k URLs for JSON-LD presence, required fields, duplicate IDs, and about/mentions.
Pulled Search Console for branded/entity queries and rich result reports.
Ran prompt tests in AI Overviews, Perplexity, and Copilot for top entities.
Collected salience scores via Google NLP on pillar pages to see if brand/products ranked as primary entities.
Logged mismatches: price vs page, hours vs schema, author vs byline.
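The audit step can be sketched as a small script: it pulls JSON-LD out of crawled HTML and flags any @id that appears with more than one @type, which is how duplicate-entity IDs usually surface. The URLs and markup are illustrative, not our client's.

```python
import json
from collections import defaultdict
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

def conflicting_ids(pages):
    """pages: {url: html}. Flags @id values used with more than one @type,
    a sign that two different entities share one identifier."""
    types_by_id = defaultdict(set)
    for html in pages.values():
        parser = JsonLdExtractor()
        parser.feed(html)
        for block in parser.blocks:
            nodes = block if isinstance(block, list) else [block]
            for node in nodes:
                if "@id" in node and "@type" in node:
                    types_by_id[node["@id"]].add(node["@type"])
    return sorted(i for i, t in types_by_id.items() if len(t) > 1)
```

In the real audit this ran over the weekly Screaming Frog export rather than raw HTML, but the duplicate-ID logic is the same.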
Step 2: entity map and policies
Created a single ID map for Organization, Products, Features (ProductModel), Integrations (SoftwareApplication), Authors (Person), and Locations (LocalBusiness).
Added naming conventions with disambiguation (e.g., “Product X — Analytics” vs “Product X — Integrations”).
Standardized sameAs links (LinkedIn, GitHub, Crunchbase, app stores) and removed low-trust profiles.
Documented about/mentions rules per template.
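The registry itself can live anywhere (ours was Airtable); the property that matters is that templates look IDs up rather than mint them. A minimal sketch, with hypothetical entity names and URLs:

```python
# In-code sketch of the ID registry; in production this lived in Airtable
# with owner and last-updated columns. All names/URLs are illustrative.
REGISTRY = {
    ("Organization", "Example Corp"): "https://example.com/#org",
    ("Product", "Product X — Analytics"): "https://example.com/#product-x-analytics",
    ("Person", "Jane Doe"): "https://example.com/#person-jane-doe",
}

def entity_id(entity_type, name):
    """Return the one canonical @id for an entity. Raising on a miss,
    instead of minting a new ID, is what prevents ID drift."""
    try:
        return REGISTRY[(entity_type, name)]
    except KeyError:
        raise KeyError(
            f"No registered @id for {entity_type} '{name}'; add it to the registry first"
        )
```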
Step 3: template and content fixes
Updated Article, Product, and Integration templates to reference IDs from the registry.
Added Product/SoftwareApplication schema with offers, identifiers, and brand links; fixed price parity.
Refreshed author bios with credentials and sameAs; added reviewer schema on YMYL content.
Added about/mentions to articles to tie to products and integrations; rewrote intros to define entities in first 100–150 words.
Implemented BreadcrumbList and WebSite searchAction across templates.
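A template that references registry IDs might render Product JSON-LD along these lines; the IDs, names, and prices are hypothetical:

```python
import json

ORG_ID = "https://example.com/#org"  # hypothetical canonical Organization @id

def product_jsonld(product_id, name, price, currency):
    """Render Product schema with an Offer and a brand link back to the
    Organization node, all via stable @ids from the registry."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "Product",
        "@id": product_id,
        "name": name,
        "brand": {"@id": ORG_ID},
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
            "availability": "https://schema.org/InStock",
        },
    })
```

The price argument here is the same value the page template renders on screen, which is what keeps schema and on-page copy in parity by construction.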
Step 4: internal linking rebuild
Pillars linked to all supports; supports linked back and to sibling topics.
Integration guides linked to partner pages and product pillars with entity-rich anchors.
Author cards added to relevant clusters; location info linked to support content where local.
Fixed orphaned pages; ensured supports were within three clicks of home.
Step 5: validation and monitoring
CI linting for required fields and duplicate IDs; Playwright checks for rendered schema.
Weekly crawls extracting JSON-LD; parity scripts for price/availability.
Prompt bank logged monthly; citations tracked by entity and market.
Dashboards in Looker showing coverage, errors, AI citations, CTR, and demo requests by cluster.
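The CI lint boils down to two checks: required fields per template and duplicate @ids. A simplified sketch (the required-field sets are illustrative of our policy, not a standard):

```python
REQUIRED = {  # required fields per template, per our governance policy
    "Product": {"@id", "name", "offers"},
    "Article": {"@id", "headline", "author"},
}

def lint(nodes):
    """Return error strings for missing required fields and duplicate @ids.
    CI fails the build whenever this list is non-empty."""
    errors, seen = [], set()
    for node in nodes:
        node_type = node.get("@type")
        missing = REQUIRED.get(node_type, set()) - node.keys()
        if missing:
            errors.append(f"{node.get('@id', '<no id>')}: missing {sorted(missing)}")
        node_id = node.get("@id")
        if node_id:
            if node_id in seen:
                errors.append(f"duplicate @id: {node_id}")
            seen.add(node_id)
    return errors
```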
Results (90 days)
Rich result eligibility restored for Product and Article; errors reduced by 92%.
AI Overviews citations: from 0 to 18 mentions across core queries; Perplexity/Copilot citing correct product specs.
CTR: +14% on product guides; +9% on integration pages.
Demo requests: +11% from entity-led clusters; lift strongest on integration content.
Time-to-fix for schema incidents dropped from 10 days to <2 days.
Dashboard snapshot (metrics we tracked)
Coverage: % pages with required schema per template; duplicate ID count.
AI citations: count/share per entity; accuracy notes.
Performance: impressions/CTR for entity queries; demos/calls from entity pages.
Freshness: age of bios, prices, and screenshots; alerts >120 days.
Trust: review ratings (if applicable), authority of new citations.
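The freshness alert reduces to a date comparison. A sketch of the 120-day check, with hypothetical item names; in production the flagged items fed a Slack webhook:

```python
from datetime import date

def stale_items(last_updated, today, max_age_days=120):
    """last_updated: {item_name: date}. Return items (bios, prices,
    screenshots) older than the freshness threshold."""
    return [name for name, updated in last_updated.items()
            if (today - updated).days > max_age_days]
```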
Tool stack used
- ID registry: Airtable with @id, type, sameAs, owner, last updated.
- Schema templates: component-based JSON-LD in the design system; Git-based versioning.
- Validation: CI lint for required fields/IDs; Playwright rendered checks; Screaming Frog custom extraction weekly.
- Parity scripts: Python comparing schema vs on-page values for price, availability, author names.
- NLP: Google NLP API to track salience for brand and products on pillars.
- Prompt logger: Python harness capturing AI Overviews/Perplexity/Copilot outputs monthly.
- Dashboards: Looker with Search Console API + GA + prompt logs + crawl results.
- Alerts: Slack webhooks on schema errors, duplicate IDs, citation drops >20%.
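The parity scripts compare field by field. A stripped-down version of the comparison step, with the extraction of schema and on-page values assumed to happen upstream:

```python
def parity_report(schema_values, page_values,
                  fields=("price", "availability", "author")):
    """Compare values extracted from JSON-LD against values scraped from
    the rendered page; mismatches feed the weekly parity report."""
    mismatches = {}
    for field in fields:
        s, p = schema_values.get(field), page_values.get(field)
        if s != p:
            mismatches[field] = {"schema": s, "page": p}
    return mismatches
```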
Timeline and milestones
- Week 1: audit and ID map drafted; prompt bank created; baseline metrics captured.
- Week 2: sameAs cleanup; schema lint in CI; fixed top 20 pages.
- Week 3: refreshed author bios and added reviewer schema; implemented about/mentions on pillars.
- Week 4: price parity scripts live; Playwright rendering checks added.
- Week 5: integration pages rewritten with consistent IDs and anchors; partner pages aligned.
- Week 6: dashboards launched; first prompt log showed 6 citations and 4 errors.
- Week 8: errors resolved; citations climbed to 12; CTR uptick visible.
- Week 10: localization pass kept IDs stable; added Portuguese descriptions; new citations appeared in PT.
- Week 12: review: Entity Health Score from 54 → 82; governance checklist adopted.
Prompt bank (sample)
- “What is [Brand]'s AI search platform?”
- “Does [Brand] integrate with [Partner]?”
- “How much does [Product] cost?”
- “Who leads AI search at [Brand]?”
- “Where is [Brand] headquartered?”
- “What features are in [Product]?”
- Logged monthly with accuracy notes and fixes.
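We logged each prompt run as a structured row. This sketch shows the shape of an entry and a simple citation-share rollup; the engine names and fields describe what we tracked, not any vendor API:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PromptLogEntry:
    """One row in the monthly prompt log (fields are illustrative)."""
    prompt: str
    engine: str          # e.g. "ai_overviews", "perplexity", "copilot"
    run_date: date
    cited_brand: bool    # did the answer cite us?
    accurate: bool       # were the stated facts correct?
    notes: str = ""

def citation_share(entries):
    """Share of logged answers that cited the brand."""
    if not entries:
        return 0.0
    return sum(e.cited_brand for e in entries) / len(entries)
```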
Content changes that mattered
- Added crisp definitions and pricing in first paragraphs of product pages; matched schema offers.
- Inserted integration summaries with partner names and links; schema mentions mirrored anchors.
- Added FAQs answering top PAA and AI prompt questions; FAQ schema used where eligible.
- Updated screenshots and images with alt text repeating canonical names and contexts.
Pitfalls we hit (and fixed)
- Issue: plugin-generated duplicate Person schema alongside custom templates. Fix: disabled plugin schema, standardized author JSON-LD.
- Issue: mixed currencies in Offers for localized pages. Fix: locale-based price injection; ISO currency codes enforced.
- Issue: outdated integration names in sameAs. Fix: quarterly sameAs review, added validation to fail builds when sameAs 404s.
- Issue: orphaned legacy URLs after redirects. Fix: crawl for 404s; added internal link rewiring and updated sitemaps.
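The mixed-currency fix is enforceable with a small validator. The ISO 4217 subset and the locale-to-currency mapping below are illustrative; production used the full code list:

```python
ISO_4217 = {"USD", "EUR", "BRL", "GBP"}          # subset for illustration
LOCALE_CURRENCY = {"en": "USD", "pt": "BRL", "fr": "EUR"}  # hypothetical mapping

def offer_currency_errors(offers):
    """offers: [(locale, priceCurrency)]. Flags codes that are not valid
    ISO 4217 and currencies that don't match the locale's expected one."""
    errors = []
    for locale, currency in offers:
        if currency not in ISO_4217:
            errors.append(f"{locale}: '{currency}' is not an ISO 4217 code")
        elif LOCALE_CURRENCY.get(locale) and currency != LOCALE_CURRENCY[locale]:
            errors.append(f"{locale}: expected {LOCALE_CURRENCY[locale]}, got {currency}")
    return errors
```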
How governance prevented regressions
- CI blocked releases when required fields empty or duplicate IDs detected.
- Change log mandated for every schema/content release with validation links.
- Monthly prompt review caught drift in assistant descriptions before it hurt CTR.
- ID map owners assigned; edits required approval; stale IDs flagged.
Applying this to your site: step-by-step
- Run a crawl + prompt audit; log errors and misstatements.
- Build one ID map; freeze IDs; clean sameAs.
- Fix templates; add linting, rendering checks, and parity scripts.
- Refresh bios/definitions; add about/mentions and answer-first intros.
- Rebuild internal links; surface author/location cards where relevant.
- Launch dashboards and alerts; start monthly prompt logs.
- Iterate with experiments (FAQ/HowTo adds, schema enrichments, anchor tests).
Vertical-specific takeaways
- SaaS: integration clarity drives citations and demos; keep ProductModel clean; link to docs.
- Clinics: hours and practitioner credentials must stay synced; reviewer schema boosts trust.
- Ecommerce: identifiers and offer parity reduce hallucinated prices; accessories/related links aid clustering.
Reporting to leadership
- Presented Entity Health Score (0–100) with subscores for coverage, citations, trust, and impact.
- Showed before/after examples of AI answers citing the brand; included sources.
- Highlighted revenue tie: demo lift and reduced support tickets from clearer integration info.
- Shared risk dashboard: remaining errors, stale bios, and next fixes.
What we'd do next
- Add Speakable/Clip schema where eligible to steer snippets.
- Expand prompt bank to multimodal (image/video) answers.
- Build a lightweight knowledge graph store to power internal chatbot and content QA.
- Automate salience scoring in CI to flag pages where entities drop below thresholds.
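Automated salience flagging is a threshold check over per-page scores (ours came from the Google NLP API); the floor value here is a hypothetical starting point, not a tuned constant:

```python
SALIENCE_FLOOR = 0.15  # hypothetical threshold; tune per template

def low_salience_pages(scores, floor=SALIENCE_FLOOR):
    """scores: {url: {entity_name: salience}}. Flag (url, entity, score)
    tuples where a target entity's salience fell below the floor, so CI
    can surface pages where the brand or product is no longer primary."""
    flagged = []
    for url, entities in scores.items():
        for entity, s in entities.items():
            if s < floor:
                flagged.append((url, entity, s))
    return flagged
```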
CTA and services
Want similar results? AISO Hub rebuilds entity systems and measurement for AI search. Start with AISO Audit to uncover drift. Use AISO Foundation to deploy ID maps, templates, and governance. Choose AISO Optimize to expand clusters and test schema/content changes. Keep gains with AISO Monitor, tracking coverage, freshness, and AI citations.
What made the biggest difference
Stable IDs and sameAs: stopped assistants from mixing products and authors.
Answer-first definitions: AI models pulled intros verbatim.
Parity checks: removing mismatched prices reduced hallucinated numbers.
Prompt logging: surfaced misstatements quickly so we could fix schema/text fast.
Reusable checklist for your projects
Build/clean the ID map with owners and sameAs; enforce across languages.
Fix schema templates; add CI lint + rendering checks.
Refresh bios/credentials; add reviewers for YMYL.
Add about/mentions and answer-first intros to pillars/supports.
Rewire internal links to mirror the entity graph; remove orphans.
Set dashboards: coverage, errors, AI citations, CTR/conversions, freshness.
Run prompt bank monthly; log outputs and fixes.
Keep a change log; enforce release annotations.
Additional metrics we tracked
Branded refinement rate: the share of branded queries needing modifiers dropped by 12%.
Knowledge Panel accuracy: incorrect fields reduced from five to zero after sameAs cleanup.
Duplicate IDs: reduced from 146 to three; remaining mapped and redirected.
About/mentions coverage: from 22% to 94% on articles, improving disambiguation.
Alert response: schema incident response time shrank from days to hours after Slack alerts.
Visualization examples (described)
Coverage chart: stacked bars per template showing required vs missing fields over weeks.
Citation trend: line chart of AI citations per entity with annotations for releases and PR events.
CTR comparison: bars for pages with full schema vs partial in the same rank band.
Parity heatmap: matrix showing match rates for price, availability, author, and hours per template.
Freshness tracker: conditional formatting for bios/images older than 120 days.
Team training highlights
45-minute workshop on @id rules, sameAs hygiene, and prompt logging.
Editor checklist: answer-first intros, on-page/schema match, sources cited, reviewer noted.
Dev checklist: linting, rendered checks, duplicate ID guardrails, rollback plan.
Leadership brief: why entity clarity drives AI citations and demo lifts.
Risks we mitigated
ID churn during future redesigns: enforced ID map approvals and CI checks.
Plugin conflicts: disabled automatic schema generators; relied on templates.
Localization drift: locked IDs across locales; added locale-aware fields for currency/timezone.
Data privacy: removed personal emails from schema; used Organization contactPoint.
Performance: trimmed JSON-LD to essentials; lazy-loaded heavy media but kept schema inline.
Quick-start mini-plan (30 days)
Week 1: audit IDs/schema/citations; create ID map; fix top 10 pages; disable conflicting plugins.
Week 2: add about/mentions and answer-first intros; refresh key bios; implement linting and rendered checks.
Week 3: rewire links for one cluster; run prompt bank; set up basic dashboards.
Week 4: fix errors from prompt/crawl, align sameAs, and present early wins to secure time for full rollout.
Lessons for different verticals
B2B SaaS
Integration entities need strict naming; align with partners' pages and sameAs.
ProductModel types help separate tiers/modules; tie to main Product.
Demo CTAs near definition blocks improved conversions.
Local services/clinics
LocalBusiness and Person IDs must stay stable; hours parity is critical.
Reviewer schema on medical content improved trust in AI answers.
Event schema for workshops boosted event carousel visibility.
Publishers
Person and Organization schema clarity raised author citations.
About/mentions on articles improved topic disambiguation; reduced miscitation to similar brands.
Knowledge Panel accuracy improved after sameAs cleanup.
Governance baked in
RACI: SEO owns entity map; engineering owns schema templates; content owns intros and bios; analytics owns dashboards; PR owns sameAs quality.
Cadence: weekly error review, monthly prompt tests, quarterly ID map audit.
Policies: no new IDs for existing entities; required fields per template; sameAs sources list; change log mandatory.
Conclusion: entity clarity compounds
Stabilizing IDs, schema, and content definitions turned a drifting site into an AI-citable source.
With monitoring, prompt logs, and governance, improvements stuck and revenue followed.
Use this playbook to diagnose, fix, and keep your own entities clear for search and AI assistants.

