Key takeaways
- We finally have first‑party visibility into AI answers. Microsoft’s new AI Performance report in Bing Webmaster Tools shows when your pages are cited across Microsoft Copilot, AI‑generated summaries in Bing, and select partner integrations. (Bing Blogs)
- The most useful new concept is grounding queries: the phrases the AI uses to retrieve content that is cited. It’s sampled data, but it’s the closest thing we’ve seen to “GSC for AI answers.” (Bing Blogs)
- Early data shows that AI citations don’t mirror traditional organic performance. Big conceptual topics can dominate AI even if they’re not top organic queries.
- The formats that consistently line up with AI citation behavior: key takeaways, FAQ blocks, tables/comparisons, strong visuals/video, and decision-support sections (e.g., “Why choose X?”). Microsoft explicitly calls out clear headings, tables, and FAQ sections as helpful for AI inclusion. (Bing Blogs)
- Longform still works—but only when it’s tightly scoped. AI systems can stitch answers from multiple focused pages; “mile‑wide” pages often underperform.
From black‑box LLMs to real performance data
For the last couple of years, AI search has been a weird mix of high stakes and low visibility.
We’ve had plenty of opinions about what LLMs prefer (more structure, more FAQs, more authority signals) but almost none of the measurement you’d expect from a mature channel. No Google Search Console equivalent. No query‑level reporting you could trust. Mostly just screenshots, anecdotal tests, and third‑party tools that approximate reality.
That changed in February 2026.
Microsoft launched AI Performance in Bing Webmaster Tools, which reports how your content appears as citations in AI‑generated answers across Microsoft Copilot, AI‑generated summaries in Bing, and select partner integrations. This is pretty huge news since it’s the first time many of us have had first‑party, GSC‑like visibility into how LLM‑style experiences are using our pages.
Microsoft’s dashboard includes metrics like Total Citations, Average Cited Pages, Grounding queries, and page‑level citation activity—plus trends over time. (Bing Blogs)
Two important caveats right up front:
- Citations ≠ clicks. This report is about whether you were referenced, not whether users visited your site. (Semrush)
- Citations ≠ ranking or “credit.” Microsoft is explicit that these metrics don’t indicate placement, authority, or the role your page played inside the answer. (Bing Blogs)
Even with those limitations, it’s an unlock. Now we can ask the practical question content teams actually need answered: Which content formats win in AI answers?
What are “grounding queries,” and why do they matter?
Microsoft defines grounding queries as the key phrases the AI used when retrieving content that was referenced in AI‑generated answers, and notes that the grounding query data shown is a sample of overall citation activity. (Bing Blogs)
In practice, grounding queries behave differently from classic SEO keywords:
- They often look like topic‑level conceptualizations, not one exact user query.
- They can represent a roll‑up of many long‑tail prompts that all require similar grounding.
- They’re a better directional signal for how AI systems understand and contextualize your content than manually seeded prompts alone.
Grounding queries are almost like AI’s topic clusters derived from actual user data. That’s why they’re so valuable for formatting decisions.
If you only use seeded prompts (e.g., in Profound/Athena/other GEO tools), you risk optimizing for the prompts you think matter. Grounding queries reveal how AI systems actually retrieve answers.
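To make that roll‑up concrete, here’s a toy sketch (the prompts and keyword map are hypothetical, not real Bing data) of how several long‑tail prompts can collapse into a single topic‑level grounding query:

```python
from collections import Counter

# Hypothetical long-tail prompts that all need the same concept grounded.
prompts = [
    "what is agile project management",
    "explain agile vs waterfall for a new team",
    "how do agile sprints work",
    "is agile right for a two-person startup",
]

# Crude normalization: map each prompt to the core concept it retrieves against.
TOPIC_KEYWORDS = {"agile": "agile", "kanban": "kanban", "scrum": "scrum"}

def grounding_topic(prompt: str) -> str:
    for keyword, topic in TOPIC_KEYWORDS.items():
        if keyword in prompt.lower():
            return topic
    return "other"

rollup = Counter(grounding_topic(p) for p in prompts)
print(rollup)  # Counter({'agile': 4}): four prompts, one grounding query
```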
The surprise gap between AI and traditional organic search
When we pulled Atlassian’s early AI Performance data, the biggest insight was that AI citation “winners” can be totally different from your top organic queries.
Agile: #1 in AI, invisible in top organic queries

In Bing’s AI data, “Agile” was the #1 grounding query for Atlassian, at roughly 170,000 average monthly citations in the reporting window we reviewed. But in Google Search Console, “Agile” wasn’t surfacing as a top organic query at all.
Interpretation:
- AI systems are discovering Atlassian as an authority for high‑level Agile education, even when that topic isn’t driving the same “top query” footprint in classic organic reporting.
- LLM experiences appear less anchored to the branded/bottom‑funnel bias we’re used to measuring and more anchored to: “Which source explains the concept clearly and can be cited safely?”
Kanban and JSM: more proof it’s not 1:1
Two more “wait, what?” moments:
- Kanban showed up as a top-cited grounding query (e.g., 7th in the set we reviewed) despite weaker traditional organic performance signals.
- Jira Service Management (JSM) appeared surprisingly high relative to Jira if you’re thinking in Google query terms.
The point isn’t the specific ordering. It’s that LLM search relevance is not a mirror of traditional SEO performance. So, if your AI strategy is “take the pages that rank in Google and make them longer,” you’re likely missing the bigger picture.
The content formats that win in AI answers
To get past speculation, we looked at the pages that were cited for high‑performing grounding queries—especially the Agile and Kanban education pages. Several formatting patterns were hard to ignore.
Key takeaways modules near the top
Both pages feature a key takeaways module close to the top.

Why this aligns with AI behavior: AI answers are typically structured as compressed summaries plus optional detail. A “key takeaways” block gives retrieval systems a high‑signal chunk that is:
- easy to extract
- easy to validate against the rest of the page
- and low risk to quote or paraphrase
This is one of those tactics that feels almost boring—and that’s why it works. If you only change one thing across a library of educational pages, a “key takeaways” block can be a high-ROI starting point.
Implementation tips
- Put it above the fold or right after the intro.
- Use bullets, not long prose.
- Make each bullet a complete thought (so it can stand alone as a citeable unit).
Multimodal content: video + rich imagery

Both pages also include a meaningful amount of visual explanation:
- Diagrams
- Illustrative graphics
- Embedded video
Why it matters (directionally):
Multimodal assets can act like “completeness” signals. In other words, you didn’t just define the term; you demonstrated it. Visuals often force clarity. If you can diagram it, you likely structured it well. Microsoft’s own guidance for improving inclusion in AI answers emphasizes reducing ambiguity and aligning content across formats. (Bing Blogs)
Implementation tips
- Aim for at least one “explain it visually” asset per core topic page.
- Make sure the text and visuals describe the same entities and claims (no contradictions).
Structured Q&A “answer units”
Both pages contain an FAQ block.

This is the most straightforward alignment with LLM outputs:
- Users ask questions
- LLMs answer questions
- FAQ blocks provide atomic Q→A pairs that are easy to retrieve and cite
Microsoft explicitly calls out that clear headings, tables, and FAQ sections help surface key information and make content easier for AI systems to reference accurately. (Bing Blogs)
Implementation tips
- Don’t bury FAQs under 3,000 words of content.
- Write questions the way humans ask them.
- Keep answers concise, but include enough specificity to be trustworthy.
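Microsoft’s guidance above is about visible FAQ sections, not structured data. But if you also want those Q→A pairs to be machine‑readable, here’s a minimal sketch (the questions and answers are hypothetical) that generates standard schema.org FAQPage JSON‑LD from them:

```python
import json

# Hypothetical Q→A pairs for an Agile explainer page. FAQPage is standard
# schema.org vocabulary; whether AI systems weigh it is not something
# Bing's report confirms, so treat this as markup hygiene, not a guarantee.
faqs = [
    ("What is Agile?",
     "Agile is an iterative approach to project management that delivers work in small increments."),
    ("When should a team use Agile?",
     "Agile fits teams that need frequent feedback and expect requirements to change."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Drop this into a <script type="application/ld+json"> tag in the page template.
print(json.dumps(faq_schema, indent=2))
```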
Decision-support sections: “Why choose X?”
The Agile page includes a “Why choose Agile?” section.

This maps to something many teams are noticing: LLM experiences don’t just inform; they recommend. They help users decide what to do next, what to pick, and which approach to take.
So content that stops at “what is it” may underperform content that continues into:
- Who it’s for
- When to use it
- Tradeoffs
- How it compares to similar solutions
- Why users should choose it over alternatives
Implementation tips
- Add a short “Is X right for you?” section.
- Include a mini comparison (even a simple bullets‑based one).
- Link to next steps (templates, examples, implementation guides).
Tables and comparisons still matter (when they’re doing real work)
Tables win when they reduce ambiguity. Microsoft itself recommends tables as a structure that helps AI systems surface key information.

Where tables tend to help most:
- “X vs Y” comparisons
- Pros/cons
- Definitions and terminology
- Step‑by‑step processes (sometimes better as a table than prose)
- Decision matrices (useful for recommendation‑style answers)
Where tables don’t help:
- Decorative tables that just reformat paragraphs
- Tables with vague labels (“good,” “better,” “best” without criteria)
- Tables that require tons of surrounding context to interpret
Longform vs focused: how much depth is enough for AI?
This is where a lot of content teams get stuck: “Do we need longer content to win in AI?” What we saw suggests a more useful framing.
Editing down beats “mile‑wide” pages
The Agile page had historically accumulated breadth—extra tangents, galleries, adjacent roles (“Agile coach”), and other expansions that made it less “about one thing.”
It was edited down to focus much more clearly on:
- What Agile is
- Why it matters
- How to use it
Hypothesis (supported by how these systems actually work): LLMs can assemble holistic answers from multiple pages, so they don’t need one mega‑page that tries to answer everything. They need pages that are:
- Clearly about a single topic
- Internally consistent
- Easy to mine
So, “longform” works when it’s tight, not when it’s sprawling.
Topic focus tends to beat product focus (right now)
Traditional SEO for many SaaS brands is dominated by:
- Branded queries
- Product comparisons
- Bottom‑funnel pages
But in the AI citation data we reviewed, the big entry points were conceptual/educational topics (Agile, Kanban, DevOps, etc.), not product feature pages. This doesn’t mean product pages don’t matter. It means that if your goal is AI discovery, don’t start with the assumption that “we need to optimize our pricing page for AI.” Start with the pages AI already trusts as educational sources and build from there.
Using AI performance data to guide content investment
Once you can see grounding queries and cited URLs, you inevitably hit a planning question:
Double down on winners or fill the gaps?
We debated two instincts:
- Double down on what’s already working (Agile, DevOps, Kanban, etc.)
- Force investment into weaker topics to “balance” coverage
The practical take (and the one I’d bet on early):
- Lean into what’s working.
- Use the data to find adjacent gaps and expansion opportunities.
- But don’t waste cycles shoehorning content into AI workflows when the system is clearly finding value elsewhere.
Concrete example: RACI in an AI/agent world
One planning approach we liked:
- Use AI citation data to surface strong concepts inside a cluster (e.g., RACI charts appearing under Agile / Confluence‑adjacent grounding queries).
- Create a supporting, opinionated post that modernizes the topic: “What does RACI look like in an agent‑driven world?”
- Then interlink it intentionally: Blog ⇄ core educational explainer ⇄ templates/product pages
This is how you turn “we’re getting cited” into “we own the topic.”
Practical playbook: how to optimize formats for AI
Here’s a repeatable workflow you can apply whether you’re a SaaS, publisher, or marketplace.
Step 1: Mine AI citation data (Bing Webmaster Tools)
- Pull your top grounding queries and top cited pages.
- Flag surprises: topics that outperform their organic footprint.
Remember what the report is (and isn’t):
- It measures citation visibility, not clicks or rankings. (Semrush)
- Grounding queries are sampled. (Bing Blogs)
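Here’s a minimal sketch of the “flag surprises” step, assuming you’ve exported both datasets to CSV. The filenames, column names, and thresholds are hypothetical; grounding queries are also topic‑level rather than literal user queries, so exact string matching against GSC is only a crude first pass:

```python
import pandas as pd

# Hypothetical exports; rename columns to match your actual files.
ai = pd.read_csv("bing_grounding_queries.csv")  # columns: query, citations
gsc = pd.read_csv("gsc_top_queries.csv")        # columns: query, clicks

# Normalize for a naive join.
ai["query"] = ai["query"].str.strip().str.lower()
gsc["query"] = gsc["query"].str.strip().str.lower()

merged = ai.merge(gsc, on="query", how="left").fillna({"clicks": 0})

# "Surprises": heavily cited in AI answers, thin in organic clicks.
surprises = merged[(merged["citations"] >= 1000) & (merged["clicks"] < 100)]
print(surprises.sort_values("citations", ascending=False).head(20))
```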
Step 2: Audit winners for format patterns
For each top-cited page, ask:
- Is there a key takeaways block near the top?
- Is the page clearly a “What is X?” explainer?
- Is there an FAQ section?
- Are there tables/comparisons that reduce ambiguity?
- Is there at least one strong visual or video?
- Is there decision support (why choose, who it’s for, when to use)?
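You can script a first pass of this audit. Here’s a heuristic sketch using requests and BeautifulSoup; the heading keywords and element checks are guesses you’d tune to your own page templates:

```python
import requests
from bs4 import BeautifulSoup

def audit_page(url: str) -> dict:
    """Heuristic check of one cited page for the format patterns above."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    headings = [h.get_text(" ", strip=True).lower()
                for h in soup.find_all(["h2", "h3"])]

    return {
        "url": url,
        "key_takeaways": any("takeaway" in h for h in headings),
        "faq_section": any("faq" in h or "frequently asked" in h for h in headings),
        "decision_support": any("why choose" in h or "right for" in h for h in headings),
        "tables": len(soup.find_all("table")),
        "images": len(soup.find_all("img")),
        "videos": len(soup.find_all(["video", "iframe"])),  # iframes often embed video
    }

# Example run against a page discussed in this post.
print(audit_page("https://www.atlassian.com/agile"))
```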
Step 3: Refactor existing pages toward winning patterns
Start with the low‑lift, high‑impact changes:
- Add a key takeaways block.
- Add an FAQ block.
- Tighten headings so each section answers one question.
- Add a table when it can clarify a decision.
- Trim bloat that dilutes the main topic.
Microsoft’s own optimization guidance explicitly calls out structure improvements like clear headings, tables, and FAQ sections. (Bing Blogs)
Step 4: Build “surround sound” around your winners
For each winning topic, create a cluster AI can draw from:
- Opinionated blog posts (modern takes, tradeoffs, POV)
- Use‑case deep dives
- Templates and how‑tos
- Glossaries and definitions
- Internal links that connect these assets tightly
Step 5: Treat the data as directional (but don’t ignore it)
It’s a public preview, and like any new reporting layer, it has limits:
- It’s citation counts, not business outcomes. (Search Engine Land)
- It’s aggregated across multiple AI surfaces. (Bing Blogs)
- Microsoft and third parties note missing clarity around partner coverage. (Semrush)
But directionally, it’s extremely actionable. As Semrush put it, Bing’s report is the first major platform move toward dedicated AI appearance reporting, and it answers the core publisher question: “Is my content actually being referenced?” (Semrush)
Designing content for an AI-first discovery layer
AI search isn’t just “SEO with longer queries.” It’s a different discovery layer, with different incentives. It rewards:
- Clarity over cleverness
- Structure over sprawl
- Decision support over pure definition
In the early Bing AI Performance data, we saw two big lessons:
- Topic authority can show up in AI before it shows up in your top organic query mix.
- The formats that win are the ones that make information easy to retrieve, verify, and reuse.
If you treat AI as a channel with its own reporting, you can finally move beyond the guesswork and start designing content formats with feedback loops.