
Full disclosure: This post contains affiliate links. If you make a purchase through these links, 99signals may earn a commission at no additional cost to you.
Here’s a scenario that should unsettle every marketer who’s spent years building on-site content.
Seer Interactive, a well-known digital marketing agency, discovered that the phrase “high account manager turnover” was appearing in AI responses about their brand. Not occasionally. 67 times over a three-month period, across ChatGPT, Perplexity, and Google’s AI Overviews.
The source? A handful of review sites, including complaints that were years old.
Their actual retention rate over the past 12 months was 79.2%, which aligns with or exceeds industry averages. Didn’t matter. The AI models had already built their narrative from the open web, and the open web had a different story to tell.
This isn’t a cautionary tale about one agency’s reputation management problem. It’s a structural shift in how brand visibility works, and most marketing teams haven’t caught up to it yet.
The Shift: Third-Party Citations Now Outweigh On-Site Content in AI Search
For two decades, the SEO playbook was straightforward. Publish quality content on your domain. Build backlinks. Earn topical authority. Control your brand narrative through what you put on your website.
That playbook still matters for traditional Google rankings. But for AI search, the hierarchy of what gets cited has inverted.
When ChatGPT, Gemini, Perplexity, or Google’s AI Overviews generate a response about your brand or category, they’re pulling from a fundamentally different set of sources. Reddit threads. G2 and Capterra reviews. LinkedIn posts. Industry publications. Quora answers. YouTube transcripts. News articles. Forum discussions from 2019 that you forgot existed.

Your meticulously optimized landing page? It might be in the mix. But it’s competing with every third-party mention of your brand across the entire indexable web, and AI models often weight third-party validation more heavily than first-party claims.
Which makes sense if you think about it from the model’s perspective: a brand saying “we’re great” is marketing, but a dozen independent sources saying “they’re great” is evidence.
The Seer Interactive data makes this concrete.
When they analyzed their branded prompt outputs, their own website appeared at a 286% citation rate (meaning multiple Seer URLs were often cited in a single response).
But that single recurring negative review showed up as a source in 38% of 1,152 branded prompt outputs.
Their own content was present, but the narrative was being shaped by what others said about them, not what they said about themselves.
The Citation Economy: Where Brand Reputation Actually Lives Now
Think of AI search as a citation economy. In traditional SEO, you controlled your content and earned links. In AI search, what gets cited is largely decided by third-party mentions. The competitive battleground has moved off your domain.
This creates a few dynamics that most marketing teams aren’t equipped to handle.
1. Old content has an outsized shelf life in AI
A frustrated customer’s Reddit post from 2021, a negative Glassdoor review from 2019, a comparison blog post from 2023 that positioned your competitor favorably: all of it lives in AI training data and gets surfaced in real-time retrieval.
You may have fixed the product issue, resolved the complaint, or leapfrogged the competitor.
The AI doesn’t know that unless the web reflects it.
2. AI models have a strong freshness bias, which is both a threat and an opportunity
Seer Interactive’s research found that nearly 65% of AI bot hits targeted content published in the past year, and 89% of all bot hits occurred on content published within three years.
This means outdated negative content can be displaced by newer, more accurate content, but only if you’re actively publishing it.
Sitting on a static website while the third-party conversation evolves around you is a guaranteed way to lose narrative control.
3. The format of your content matters more than the volume
Research consistently shows that AI models favor structured content, Q&A formats, clear headings, specific data points, and sentences that can stand alone as extractable answers.
Dense paragraphs of marketing copy perform worst.
If your brand’s most important pages read like brochure language rather than direct answers to specific questions, AI models will skip you and cite someone who structured their content better.
The Shadow Reputation Problem
Most brands have no idea what narrative AI is constructing about them.
Ask yourself: when was the last time you systematically queried ChatGPT, Gemini, Perplexity, and Google’s AI Mode with branded prompts like “What is [your brand’s] reputation?” or “What are the pros and cons of [your product]?” or “Compare [your brand] to [competitor]?”
If the answer is never, you’re flying blind in the channel that 43% of consumers have used to discover a new brand (according to a Semrush survey of 1,030 U.S. shoppers), and where 22% have completed a purchase directly inside an AI tool.
The Seer Interactive team calls this “shadow reputation,” the brand narrative that exists in AI responses but doesn’t show up in your Google Analytics, your brand tracking surveys, or your social listening dashboards.
It’s the version of your brand that potential customers encounter when they ask an AI assistant whether they should hire you, buy your product, or consider your competitor instead.
And unlike a bad Google review that sits on one platform, a shadow reputation propagates across every AI model simultaneously.
Here’s the detail from Seer’s experience that should make every marketer pay attention: after they published a single blog post correcting the turnover misconception with verifiable data (79.2% retention rate, sourced and specific), the AI models picked it up within days.
After just nine days, the “high turnover” narrative stopped appearing entirely. The new post was cited nine times, and the correct retention stat replaced the old narrative across every major AI platform.
One well-structured piece of content, published with specific data and clear formatting, overwrote years of accumulated negative signal. That’s both the vulnerability and the opportunity in the citation economy.
What Actually Drives AI Citations (It’s Not What You Think)
The emerging research on what makes content get cited by AI models challenges some long-held SEO assumptions.
1. Traditional SEO authority still underpins AI citations
Seer Interactive’s March 2026 research found that the top five metrics driving LLM citations are domain authority, high-quality backlinks from DA 60+ sites, mentions in “best of” listicles, total number of backlinks, and unique referring domains. Traditional SEO isn’t dead. It’s the foundation that makes AI visibility possible.
2. Content depth and readability matter more than traffic
Research from Growth Memo found that when it comes to securing AI mentions and citations, content depth (sentence and word counts) and readability matter most, while traditional SEO metrics like traffic and backlinks have little direct impact on whether a specific piece gets cited.
3. Position in the content matters
44.2% of all LLM citations come from the first 30% of a text (the introduction). If your key claims, data points, and brand positioning are buried in paragraph twelve, AI models may never surface them.
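As a rough self-check, you can measure where your key claims sit in a draft before publishing. Here’s a minimal sketch; the 30% threshold mirrors the statistic above, and the draft text and claim strings are purely illustrative:

```python
def claims_in_intro(text: str, claims: list[str], threshold: float = 0.30) -> dict[str, bool]:
    """Report whether each key claim appears within the first `threshold`
    fraction of the text (measured by character count)."""
    cutoff = int(len(text) * threshold)
    intro = text[:cutoff].lower()
    return {claim: claim.lower() in intro for claim in claims}

# Hypothetical draft: the key stat leads, filler follows.
draft = (
    "Our account manager retention rate over the past 12 months was 79.2%, "
    "above industry averages. " + "Background and history follow. " * 20
)
print(claims_in_intro(draft, ["79.2%", "industry averages", "award-winning"]))
```

If a claim comes back `False`, it is sitting too deep in the piece to land in the zone AI models cite from most.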
4. Freshness is a citation signal
85% of AI Overview citations were published in the last two years. 50% of Perplexity citations come from 2025 content alone. If your most important content hasn’t been updated recently, it’s losing ground to competitors who are publishing now.
How to Take Back Your Brand Narrative
The shift from on-site SEO to third-party citation management requires a different set of muscles than most marketing teams have built. But it’s not as overwhelming as it sounds if you prioritize correctly.
1. Audit your AI brand presence first
Before you change anything, find out what AI models are actually saying about you. Query ChatGPT, Gemini, Perplexity, and Google’s AI Mode with branded prompts, competitive comparison prompts, and category-level prompts (“best [your category] tools for [use case]”).
Document the themes, the sources being cited, and any inaccuracies or outdated information. This is your baseline.
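Even before buying tooling, you can script the prompt matrix so every audit run asks the same questions. A minimal sketch; the brand, competitor, and category values are placeholders, and actually sending these prompts to each model is left to whatever API access you have:

```python
def build_audit_prompts(brand: str, competitors: list[str], category: str) -> list[str]:
    """Generate the branded, comparison, and category-level prompts
    described above, so each audit queries a consistent set."""
    prompts = [
        f"What is {brand}'s reputation?",
        f"What are the pros and cons of {brand}?",
    ]
    # One comparison prompt per tracked competitor.
    prompts += [f"Compare {brand} to {rival}" for rival in competitors]
    # Category-level prompt, where unbranded discovery happens.
    prompts.append(f"Best {category} tools for small teams")
    return prompts

for prompt in build_audit_prompts("Acme Analytics", ["RivalCo"], "marketing analytics"):
    print(prompt)
```

Running the same matrix monthly and saving the responses gives you a dated record of how your shadow reputation is trending.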
For ongoing monitoring, Semrush One is worth looking at because it bundles traditional SEO tracking with AI visibility reports that show where your brand appears (and where it doesn’t) across ChatGPT, Gemini, AI Overviews, and AI Mode.
You can see which third-party sources are being cited in your category, spot shadow reputation issues before they compound, track whether your digital PR efforts are translating into AI mentions, and identify the prompts where competitors are getting cited and you’re not.
Semrush offers a free 7-day trial of Semrush One, so you can run your baseline audit before committing.
2. Publish corrective content with verifiable data
If AI models are surfacing inaccurate or outdated information about your brand, the Seer playbook works: publish a single, well-structured piece on your own domain with specific, sourced data that directly addresses the misconception.
Make the correct information appear in the first 30% of the content.
AI models pick up fresh, authoritative corrections quickly.
3. Invest in the platforms AI models actually pull from
Reddit, LinkedIn, G2, Capterra, industry publications, YouTube. These aren’t secondary channels anymore. They’re primary sources for AI training data and real-time retrieval.
A thoughtful answer on Reddit, a detailed LinkedIn post with original data, a well-written G2 review from a satisfied customer: these are now brand visibility assets in a way they never were before.
4. Structure your content for extraction, not just reading
Lead with direct answers. Use clear headings. Include specific numbers and data points. Write sentences that can stand alone as quotable facts. AI models are citation engines.
Give them something worth citing.
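One concrete way to make answers extractable is FAQ-style structured data. A minimal sketch that emits schema.org FAQPage JSON-LD from question-and-answer pairs; the brand, question, and answer text are illustrative, not from any real page:

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs,
    giving AI models and search engines standalone, quotable answers."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)

print(faq_jsonld([
    ("What is Acme's client retention rate?",
     "Acme's 12-month client retention rate is 79.2%, at or above industry averages."),
]))
```

Embedding the output in a `<script type="application/ld+json">` tag pairs each specific question with a self-contained answer, exactly the shape citation engines extract.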
5. Start asking for reviews
Seer Interactive admitted in their post that in 23 years, they’d never asked clients for public reviews. The AI search shift changed their mind.
If your happy customers aren’t leaving reviews on the platforms AI models pull from, the only signal AI has is from the customers who were unhappy enough to write about it without being asked.
The asymmetry is obvious, and the fix is uncomfortable but necessary.
The Uncomfortable Truth
The uncomfortable truth about this shift is that it makes marketing harder in a way that can’t be solved with tools alone.
You can monitor your AI presence, track citations, and identify gaps. But actually influencing what third parties say about you requires doing the hard, slow work of building genuine relationships, delivering exceptional service, earning real advocacy, and showing up in communities with substance rather than self-promotion.
AI models don’t care about your content calendar. They care about what the internet says about you. And the internet’s opinion is shaped by every customer interaction, every review left unresponded to, every community question you didn’t bother answering, and every competitor comparison you ignored.
The brands that win in the citation economy won’t be the ones with the best on-site SEO (though that still matters). They’ll be the ones that the rest of the internet can’t stop talking about positively.
That’s always been true. AI just made it measurable.