When we started this analysis, the question seemed simple: where do AI models actually get their information about brands?
The answer turned out to be more useful, and more uncomfortable for brand teams, than we expected.
We analyzed 10,000 AI responses on ChatGPT, tracking which sources the model cited when answering brand-related queries. We looked at source type, query intent, and brand size. The pattern that emerged was consistent enough that it’s hard to ignore.
LLMs don’t trust brands to describe themselves. At least, not when there’s any evaluation involved.
What We Tracked and How
We grouped queries into three types that reflect how buyers actually interact with AI during a purchase process:
- Definitional queries (“what is [brand]”)
- Sentiment and evaluation queries (“is [brand] good”, “should I buy [brand]”)
- Category queries (“best [category] for X”)
For each response, we tracked which URLs appeared in ChatGPT’s citation packs, how far down brand-owned pages appeared when they did show up, and how that pattern shifted depending on brand size and category.
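The tracking scheme above can be sketched in code. The study's actual taxonomy and tooling aren't published, so the keyword patterns, function names, and the `Citation` record below are illustrative assumptions, not the real pipeline:

```python
import re
from dataclasses import dataclass

# Illustrative query-type classifier. The real study's keyword lists
# are not published; these patterns are assumptions based on the
# example queries given in the article.
QUERY_PATTERNS = {
    "definitional": [r"^what is\b"],
    "sentiment": [r"^is .+ good", r"^should i buy\b"],
    "category": [r"^best\b", r"^top\b", r"^where can i buy\b", r"^which brands sell\b"],
}

def classify_query(query: str) -> str:
    """Bucket a query into one of the three tracked types."""
    q = query.lower().strip()
    for qtype, patterns in QUERY_PATTERNS.items():
        if any(re.search(p, q) for p in patterns):
            return qtype
    return "other"

@dataclass
class Citation:
    """One entry in a response's citation pack."""
    url: str           # cited source URL
    position: int      # 1-based rank within the citation pack
    brand_owned: bool  # does the URL sit on the brand's own domain?
```

A record like this per citation is enough to reconstruct every finding discussed below: source type, pack position, and how both shift by query type.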
Definitional Queries: The One Place Brands Have Control
In every definitional query we ran, brand-owned pages led the citation pack. For a mid-market B2B software brand we analyzed, the citations were almost entirely the brand’s own domain. No review platforms. No third-party comparisons. The model went straight to the source.
This makes sense. A definitional query has no evaluative component. The model isn’t weighing options or synthesizing opinions; it’s looking for a factual description, and the most authoritative source for what a brand is happens to be the brand itself.
For very large, category-dominant brands, the pattern holds but expands.
The brand’s own domain still led the citation pack, but Wikipedia and large-scale media outlets like TechTarget appeared alongside it: institutional sources that exist to define things authoritatively. The brand still dominates, but third parties show up as credibility anchors in a way they simply don’t for smaller brands.
The implication is straightforward: definitional queries are the one query type where your own content has direct and measurable influence. If your pages are unstructured, vague, or hard to extract from, you’re losing the clearest opportunity you have.
Sentiment Queries: Where Brands Lose the Narrative
This is where the picture changes completely.
In every sentiment query we ran, review platforms appeared before brand-owned pages in the citation pack. For queries like “is [brand] good” and “should I buy [brand]”, Capterra, Trustpilot, Software Advice, and G2 led the citations. Brand-owned pages appeared, but only after third-party sources had already framed the answer.
The model isn’t reading your case studies or your testimonials page. It’s reading what customers wrote on third-party platforms and treating that as the authoritative picture of how people feel about you.
There’s a second finding here that’s more uncomfortable. For sentiment queries on mid-market brands, competitor-owned pages also appeared in the citation pack.
Pages titled “Top 5 Alternatives to [brand]” and “Best [brand] Competitors”, published by competing vendors, were being surfaced as sources when a buyer asked whether a brand was worth purchasing.
A question that starts as an evaluation of your brand can end with the model citing a competitor’s own marketing as a reference point.
For large, category-dominant brands, the sentiment picture looks different, but not necessarily better.
When we ran sentiment queries on a brand operating across multiple business lines, the citation pack couldn’t cleanly separate consumer sentiment from employee sentiment.
A buyer asking whether to use the product was being served sources about working conditions and management culture alongside customer experience reviews. Brand size doesn’t protect you from narrative bleed.
Category Queries: Brands Disappear Entirely
Category queries are where the sourcing pattern becomes most variable, and where the type of question matters more than the category itself.
For evaluative queries like “best software for X” or “top tools for X”, brand-owned pages were absent from the citation pack. The sources were Forbes, G2 category reports, media roundups, industry ranking websites, and independent comparison sites. The model pulled from sources that had already done the evaluation work, not from the vendors being evaluated.
But not all category queries are evaluative. “Where can I buy X” or “which brands sell X” produced a different pattern entirely. Brand-owned pages appeared directly, because the question isn’t asking for an opinion. It’s asking for a list of sources, and brands can be the sources.
For evaluative category queries, the path to visibility runs through third-party sources like analyst reports and editorial roundups. But it’s not purely passive. Brands that publish their own comparison content and listicles do appear in citation packs, though that pattern is showing early signs of shifting as models get better at distinguishing independent editorial from brand-produced evaluation content. It’s worth doing, but worth watching.
For transactional category queries, your own pages have a direct role. Structure and extractability matter in the same way they do for definitional queries.
Brand Size Changes Who Owns the Narrative
One of the clearest patterns across the dataset was how brand size shifted the sourcing dynamic.
For established brands with significant media coverage and review volume, third-party sources dominated across sentiment and category queries. The model had enough external material to construct answers without leaning on brand-owned pages.
For smaller or less-covered brands, the model fell back on the brand’s own website more consistently, simply because there wasn’t enough third-party content to pull from.
In some cases, brand-owned pages were the primary source across multiple query types not because the content was better, but because the alternative was a thin or inconsistent third-party record.
This creates two very different strategic situations. Enterprise brands have a source ecosystem problem. The narrative is being built from dozens of external sources they don’t control, and the work is about influencing that ecosystem rather than optimizing owned content.
Mid-market and emerging brands have a different problem: their own site carries disproportionate weight, which makes content structure and accuracy more directly impactful.
A single outdated pricing page or a vague ICP description can distort how the model represents them across multiple query types.
Neither situation is easier. They just require different responses.
Citation Position Matters as Much as Citation Presence
One finding that gets lost when you only track whether your brand appears: where it appears matters just as much.
In every sentiment query we ran, brand-owned pages appeared further down the citation pack, after review platforms and third-party sources had already shaped the answer. By the time the model surfaced a brand’s own page, the narrative had largely been determined by what came before it.
Position reflects the order in which the model weighted its sources. Leading sources shape the answer. Later sources qualify it at best. A brand page appearing fifth in a sentiment query citation pack isn’t contributing much to what the model actually tells the buyer.
This is why tracking citation presence alone isn’t enough. You need to know where your sources are appearing, which types of sources are leading the pack for each query type, and whether those leading sources are carrying an accurate picture of your brand.
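One way to make position part of the tracking, rather than presence alone, is a position-weighted share of brand-owned citations per query type. This is a minimal sketch under an assumed reciprocal-rank weighting (1/position); the article doesn't specify how much each slot actually shapes the answer, so the weighting is an illustrative choice:

```python
from collections import defaultdict

def position_weighted_share(results):
    """Position-weighted share of brand-owned citations per query type.

    `results` is a list of (query_type, citations) pairs, where each
    citation is a (position, brand_owned) tuple with 1-based positions.
    The 1/position weight is an assumption: leading sources count for
    more, later sources for progressively less.
    """
    totals = defaultdict(float)
    owned = defaultdict(float)
    for qtype, citations in results:
        for position, brand_owned in citations:
            weight = 1.0 / position
            totals[qtype] += weight
            if brand_owned:
                owned[qtype] += weight
    return {qtype: owned[qtype] / totals[qtype] for qtype in totals}
```

Under this metric, a brand page sitting fifth in a sentiment pack contributes far less to the score than one leading a definitional pack, which matches the finding that presence-only tracking overstates influence.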
The Source Map Most Brand Teams Don’t Have
Most marketing teams have a reasonable picture of their owned content and some visibility into their review platform presence. Very few have systematically mapped which third-party sources LLMs are actually pulling from when someone asks about their brand.
That map is worth building. It tells you where the narrative is coming from, which sources lead for which query types, and where the inaccuracies are concentrated. Without it, you’re making content decisions based on assumptions about what the model reads, while the sources that actually shape buyer perception go unmanaged.
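Building that map can start very simply: aggregate which domains lead the citation pack for each query type. The sketch below assumes the top three slots matter most, in line with the position finding above; the cutoff and the input shape are assumptions, not a prescribed method:

```python
from collections import Counter, defaultdict
from urllib.parse import urlparse

def build_source_map(results):
    """Tally which domains lead the citation pack per query type.

    `results` is a list of (query_type, urls) pairs, with URLs in
    citation-pack order. Only the top-3 slots are counted, on the
    assumption that leading sources shape the answer most.
    """
    source_map = defaultdict(Counter)
    for qtype, urls in results:
        for url in urls[:3]:
            domain = urlparse(url).netloc
            source_map[qtype][domain] += 1
    return source_map
```

Run over a few hundred tracked responses, a table like this shows at a glance which third-party domains are constructing the narrative for each query type, and where accuracy work should be concentrated.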