AI Citation Audit: Finding Where You Are and Aren't Being Cited

Run an AI citation audit to find where your brand appears in ChatGPT, Perplexity, and Google AI. Step-by-step process for finding and closing citation gaps.

When was the last time you asked ChatGPT or Perplexity to recommend a product in your category? And did your brand actually show up?

If you haven't checked, you're flying blind. Most marketing teams track their Google rankings religiously but have zero visibility into whether AI platforms cite them, mention competitors instead, or hallucinate inaccurate information about their brand. That gap is getting more expensive by the day.

ChatGPT has 800 million weekly active users. Perplexity processes over 500 million monthly searches. Half of B2B buyers now start their research with AI chatbots instead of Google, according to G2 research. If your brand is invisible in those conversations, you're losing pipeline you didn't know existed.

An AI citation audit closes that gap. It's a systematic process for finding where your brand appears in AI responses, where it's missing, and where competitors are capturing the attention you should be getting. Here's how to run one.

What an AI Citation Audit Reveals

An AI citation audit maps your brand's presence across the AI platforms that influence buying decisions. It answers four questions that traditional SEO tools can't.

First, where do you appear? Which prompts trigger AI responses that mention your brand, link to your content, or recommend your product? These are your citation wins, and understanding them tells you what's working in your current content strategy.

Second, where are you missing? Which high-value prompts return responses that don't mention your brand at all? These gaps represent lost demand. Every prompt where a competitor appears and you don't is a prospect who chose someone else without ever seeing your option.

Third, what are AI platforms saying about you? When your brand does appear, is the information accurate? Are pricing details correct? Are features described honestly? AI hallucinations about your brand can be worse than invisibility because they actively mislead potential customers. We covered how to detect and fix these in our guide on AI hallucinations about your brand.

Fourth, how do you compare to competitors? When a user asks for recommendations in your category, which brands get named first? Which get cited as sources? Which get the most detailed treatment? This competitive view is the AI equivalent of checking your organic search rankings against the competition.

Building Your Audit Prompt List

The quality of your audit depends entirely on the prompts you test. Bad prompts give you misleading data. Good prompts reveal actionable gaps.

Start by building a list of 30-50 prompts across three categories:

Brand Prompts (10-15)

These test whether AI platforms know your brand exists and describe it accurately.

- "What is [your brand]?"
- "Tell me about [your brand] and what they do"
- "What are the pros and cons of [your brand]?"
- "How much does [your brand] cost?"
- "Is [your brand] worth the money?"
- "What do people say about [your brand]?"

Run each prompt across ChatGPT, Perplexity, and Google AI Mode. Document whether the response is accurate, outdated, or contains hallucinations. If your pricing is wrong or features are misrepresented, flag those for immediate attention.

Category Prompts (10-20)

These test whether your brand appears when users search for your product category without naming you specifically.

- "What's the best [your category] for [your target audience]?"
- "Compare [your category] tools for [specific use case]"
- "Which [your category] should I choose if I need [specific feature]?"
- "Top [your category] for small businesses in 2026"
- "[Your category] recommendations for [industry vertical]"

Category prompts are where most brands discover their biggest gaps. You might rank #1 on Google for "best AI visibility tool" but not appear at all when someone asks ChatGPT the same question. The signals that drive AI search visibility are different from traditional ranking factors, and this part of the audit shows you exactly where those differences hurt.

Competitor Prompts (10-15)

These test how your brand compares to specific competitors in AI responses.

- "[Your brand] vs [Competitor]: which is better?"
- "What are alternatives to [Competitor]?"
- "Should I use [Competitor] or [Your brand]?"
- "[Competitor] review: is it worth it?"

When running competitor prompts, note whether your brand appears as an alternative even when you didn't specifically ask about it. Brands with strong AI brand monitoring practices often discover they're being recommended in competitor contexts they never targeted.

Don't limit yourself to direct competitors, either. Include adjacent category leaders. If you sell marketing analytics software, test prompts about CRM tools, ad platforms, and SEO suites. AI platforms often cross-reference categories when making recommendations, and appearing as a complementary tool in an adjacent category can drive qualified traffic you wouldn't get from head-to-head comparisons alone.
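The three prompt categories above lend themselves to simple template expansion, so you don't have to hand-write every variation. Here is a minimal sketch in Python; the brand, category, audience, and competitor values are hypothetical placeholders, not recommendations:

```python
# Hypothetical example values -- substitute your own brand, category,
# target audiences, and competitors.
BRAND = "AI Radar"
CATEGORY = "AI visibility tool"
AUDIENCES = ["agencies", "small businesses"]
COMPETITORS = ["Semrush", "Otterly AI"]

def build_prompt_list():
    """Expand the three prompt categories into one flat audit list."""
    prompts = []
    # Brand prompts: does the platform know the brand and describe it accurately?
    for template in [
        "What is {b}?",
        "What are the pros and cons of {b}?",
        "How much does {b} cost?",
    ]:
        prompts.append(("brand", template.format(b=BRAND)))
    # Category prompts: does the brand surface in unbranded category searches?
    for audience in AUDIENCES:
        prompts.append(("category", f"What's the best {CATEGORY} for {audience}?"))
    # Competitor prompts: head-to-head comparisons and alternatives.
    for competitor in COMPETITORS:
        prompts.append(("competitor", f"{BRAND} vs {competitor}: which is better?"))
        prompts.append(("competitor", f"What are alternatives to {competitor}?"))
    return prompts

prompt_list = build_prompt_list()
print(len(prompt_list))  # 3 brand + 2 category + 4 competitor prompts = 9
```

Expanding templates this way makes it easy to hit the 30-50 prompt target and keeps the category labels attached to each prompt for later scoring.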

Running the Audit Across Platforms

Different AI platforms cite sources differently. Your audit needs to account for these variations.

ChatGPT uses a mix of training data knowledge and real-time web search. 18% of ChatGPT conversations trigger web citations, according to Profound's analysis of roughly 700,000 conversations. When ChatGPT does cite sources, it tends to favor authoritative domains, Wikipedia (47.9% of citations per ALLMO research), and recent content. ChatGPT's OAI-SearchBot crawls sites every few days to weeks, so new content takes time to enter its citation pool.

Perplexity cites sources in nearly every response because it's designed as a citation-first search engine. It indexes content within hours to days due to real-time web crawling, making it the fastest platform for seeing the impact of content changes. Perplexity citations tend to favor recent, well-structured content with clear headings and sourced data.

Google AI Overviews pull from Google's existing search index, meaning you typically need to rank on page one organically to appear in an AI Overview. Google AI Overviews appear in 30% or more of searches, and brands cited in them see 35% higher organic CTR, per Seer Interactive's study of 25.1 million impressions.

Google AI Mode is Google's conversational AI experience with 100 million monthly active users in the US and India. It draws from similar sources as AI Overviews but in a more conversational format.

For each platform, document: Does your brand appear? In what position? As a recommendation, a citation link, or just a mention? Is the information accurate?

One thing I've learned from running these audits: the results can vary between sessions on the same platform. ChatGPT's responses aren't deterministic, so run each prompt 2-3 times and look for patterns rather than treating a single response as definitive. If your brand appears in 2 out of 3 runs, that's a stronger signal than appearing once. If it never appears, that's a clear gap.
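Because responses aren't deterministic, it helps to record each run and compute a presence rate per prompt rather than a single yes/no. A minimal sketch, assuming you've pasted or fetched the response text for each run (the example responses below are invented for illustration):

```python
import re

def brand_appears(response_text, brand="AI Radar"):
    """Case-insensitive whole-word check for the brand in a response."""
    return re.search(rf"\b{re.escape(brand)}\b", response_text, re.IGNORECASE) is not None

def presence_rate(responses, brand="AI Radar"):
    """Fraction of runs (e.g. 2-3 per prompt) in which the brand appeared."""
    hits = sum(brand_appears(r, brand) for r in responses)
    return hits / len(responses)

# Example: three runs of the same prompt on one platform (invented text).
runs = [
    "Top options include Semrush and AI Radar for agencies.",
    "Consider Otterly AI or Semrush for this use case.",
    "AI Radar and Semrush both track AI citations.",
]
print(presence_rate(runs))  # 2 of 3 runs mention the brand
```

A presence rate of 2/3 across runs is the "stronger signal" described above; 0/3 is a clear gap worth scoring.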

Also note the language AI platforms use when recommending your brand versus competitors. Are they enthusiastic ("one of the best options") or lukewarm ("another option worth considering")? The framing matters because it directly influences whether users click through or keep scrolling.

Scoring and Prioritizing Citation Gaps

After running all your prompts, you'll have a matrix of results. Now you need to prioritize which gaps to fix first.

I score each prompt on three dimensions:

Business impact (1-5): How close is this prompt to a purchase decision? "Best [category] for [ICP]" scores 5. "What is [general concept]?" scores 2. Bottom-of-funnel (BOFU) comparison content converts at 4.78% versus 0.19% for top-of-funnel (TOFU) content, according to CXL's conversion rate study. Apply the same logic to prompts.

Competitive exposure (1-5): How many competitors appear for this prompt while you don't? If three competitors are cited and you're absent, that's a 5. If nobody appears, it's less urgent.

Fixability (1-5): How easy would it be to earn a citation for this prompt? If you already have relevant content that just needs restructuring, that's a 5. If you'd need to build brand authority from scratch, that's a 1.

Multiply the three scores. Focus on prompts with the highest combined score first. A high-impact, high-competition, easily-fixable gap is worth more than a low-impact gap that requires months of authority building.

| Prompt Example | Business Impact | Competitive Exposure | Fixability | Priority Score |
|---|---|---|---|---|
| "Best AI visibility tool for agencies" | 5 | 4 | 4 | 80 |
| "AI Radar vs Semrush for AI monitoring" | 5 | 3 | 5 | 75 |
| "What is generative engine optimization" | 2 | 2 | 5 | 20 |
| "How to track AI brand mentions" | 4 | 3 | 3 | 36 |

Fixing Your Citation Gaps

Once you've prioritized your gaps, the remediation plan follows a predictable pattern.

For Prompts Where You Have Content but Aren't Cited

This is the most common scenario and the fastest to fix. You have a relevant page, but AI platforms aren't choosing it as a source.

Restructure the page using answer-first formatting. The SE Ranking 2025 study of 129,000 domains found that pages with sections of 120-180 words between headings receive 70% more ChatGPT citations. Move your key answer to the first sentence after each H2. Add FAQ sections. Pages with FAQ sections nearly double their chances of being cited, per the same study.

Add structured data. Implement FAQPage, Article, and Organization schema in JSON-LD format. Authors with visible credentials receive 40% more citations from AI models, according to Qwairy's 2026 research. Make sure your author markup is complete.

Update the content. AI-cited content is 25.7% fresher than traditional search results, according to the Ahrefs study of 17 million citations. Add a visible "Last Updated" date, refresh statistics, and include 2026 data wherever possible. Qwairy found that adding a "Last Updated" date alone increased one guide's citation rate from 42% to 61%.

For Prompts Where You Have No Content

Create targeted content that directly answers the prompt. Don't write generic category content and hope AI platforms find it relevant. Write specifically for the prompt's intent, constraints, and context.

Articles over 2,900 words are 59% more likely to be chosen as a ChatGPT citation than those under 800 words, per SE Ranking's research. Go deep. Include named sources: content with 19 or more statistical data points averages 5.4 citations versus 2.8 for minimal-data pages.

For Prompts Where AI Says Wrong Things About You

Hallucinations require a different approach. You need to create or improve the authoritative source that AI platforms should be referencing instead of whatever training data is producing the error.

Publish clear, factual content on your own site that directly addresses the incorrect claim. If ChatGPT says your product costs $199/month but it actually costs $39-289/month, make sure your pricing page is clearly structured, schema-marked, and recently updated. Build entity signals so AI platforms can confidently attribute accurate information to your brand.

Hallucination fixes take time. ChatGPT's model updates don't happen instantly, and its web search may still pull outdated cached information for weeks. Perplexity is faster to correct because of its real-time crawling. Google AI Overviews tend to self-correct as your organic content improves. The key is persistence: publish the correct information, ensure it's the most authoritative source available, and the AI platforms will eventually converge on it.

For severe hallucinations, such as AI telling users your product is discontinued, has a major security flaw, or costs 10x what it actually does, consider filing feedback directly with the AI platform. ChatGPT and Perplexity both have feedback mechanisms, and flagging factual errors about your brand can accelerate corrections.

Setting Up Ongoing Monitoring

A one-time audit gives you a baseline. Ongoing monitoring turns that baseline into a competitive advantage.

The minimum viable monitoring setup includes:

- Weekly spot checks: Run your top 10 highest-priority prompts across ChatGPT and Perplexity every week. Track changes in citation presence and accuracy.
- Monthly full audit: Re-run your complete 30-50 prompt list monthly. Update your scoring matrix and reprioritize gaps based on new results.
- Quarterly authority review: Check your brand mention coverage across industry publications, review sites, and directories. Brand web mentions have the strongest correlation (0.664 Spearman) with AI visibility, according to Ahrefs' study of 75,000 brands.
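For the weekly spot checks, the useful signal is change between snapshots: which prompts gained a citation and which lost one. A minimal diffing sketch, assuming each audit snapshot is a mapping from prompt to whether your brand appeared (the example prompts are invented):

```python
def flag_changes(previous, current):
    """Compare two audit snapshots ({prompt: appeared}) and flag changes."""
    changes = {}
    for prompt, after in current.items():
        before = previous.get(prompt)
        # Only flag prompts present in both snapshots whose status flipped.
        if before is not None and before != after:
            changes[prompt] = "gained" if after else "lost"
    return changes

last_week = {"best AI visibility tool": False, "AI brand monitoring": True}
this_week = {"best AI visibility tool": True, "AI brand monitoring": False}
print(flag_changes(last_week, this_week))
# {'best AI visibility tool': 'gained', 'AI brand monitoring': 'lost'}
```

A "lost" flag on a high-priority prompt is exactly the kind of shift that's easy to miss with manual checks, which is the argument for automation below.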

Manual tracking works when you're starting out, but it doesn't scale. For brands with more than 20 target prompts, automated monitoring tools save hours per week and catch changes that manual checks miss. AI Radar automates ChatGPT citation tracking across your prompt portfolio, running daily scans and flagging visibility changes.

Other platforms in the space include Semrush AI Visibility ($99/month for standalone access, tracks 5 platforms) and Otterly AI (starting at $29/month for 15 prompts across 7 platforms). The right tool depends on your budget and which AI platforms matter most for your audience. Our comprehensive tool comparison breaks down the full market.

Your First Audit in 7 Days

Here's a realistic timeline for running your first AI citation audit:

Day 1-2: Build your prompt list. Pull questions from sales calls, draft category prompts for your top 5 keywords, and write comparison prompts for your top 3 competitors. Target 30-40 total prompts.

Day 3-4: Run prompts across platforms. Test every prompt on ChatGPT, Perplexity, and Google AI Mode. Document citations, mentions, accuracy, and competitor presence in a spreadsheet.

Day 5: Score and prioritize. Rate each gap by business impact, competitive exposure, and fixability. Sort by priority score.

Day 6-7: Build your remediation plan. For your top 10 gaps, identify whether you need to restructure existing content, create new content, or fix inaccurate information. Assign timelines and owners.

The first audit always reveals surprises. Most brands find they're completely absent from 40-60% of high-value prompts. That's not a failure. It's an opportunity. Every gap you close is a citation your competitors don't have.

After your first audit, set a recurring calendar reminder. The brands that win in AI search aren't the ones who audit once. They're the ones who track their citation presence as consistently as they track their Google rankings. The signals that determine AI citations (content freshness, authority mentions, structured data, and competitive positioning) all shift over time. A monthly audit cadence keeps you ahead of those shifts.

AI search visitors convert at 4.4x the rate of traditional organic visitors, per Semrush's 2025 analysis of 12 million website visits. Finding and closing your citation gaps isn't just a visibility play. It directly impacts revenue.

Start tracking your AI citations with AI Radar →

Frequently Asked Questions

How often should I run an AI citation audit?
Run a full audit monthly and spot-check your top 10 prompts weekly. AI platforms update their citation sources regularly, so what works this month might change next month as competitors publish new content or platforms update their models.

Which AI platforms should I audit?
At minimum, audit ChatGPT, Perplexity, and Google AI Overviews. These cover the largest share of AI search traffic. Add Google AI Mode and Claude if your audience uses those platforms, which you can gauge from your referral traffic data.

How many prompts do I need for a meaningful audit?
Start with 30-50 prompts across brand, category, and competitor types. Fewer than 20 gives you too narrow a view. More than 100 becomes hard to manage manually. Scale up with automated tools when you outgrow manual tracking.

What's the difference between an AI citation and an AI mention?
A citation is when an AI platform links to your content as a source. A mention is when it names your brand without linking. Citations are more valuable because they can drive traffic, but mentions still build brand visibility and influence recommendations.

Can I do an AI citation audit without paid tools?
Yes. You can manually test prompts across ChatGPT, Perplexity, and Google AI Mode for free. Paid tools like AI Radar, Semrush, and Otterly automate the process, track changes over time, and scale beyond what manual testing can handle.

How long does it take to close a citation gap?
For gaps where you have existing content that needs restructuring, expect 2-4 weeks for Perplexity and 4-8 weeks for ChatGPT. For gaps requiring new content and authority building, expect 3-6 months for meaningful citation improvements.
