How to Rank in AI Search

Written by Ryan Jones. Updated on 3 October 2025

Billions of search queries are now flowing through AI search interfaces every month.

ChatGPT crossed 800 million weekly users as of May 2025. Perplexity AI logged 148.2 million visits in August 2025. And Google’s AI Overviews now appear in roughly 18% of searches.

The way people discover information is being rewritten.

LLMs (Large Language Models) don’t send users to “10 blue links.” They deliver synthesized, direct answers. That changes the game for websites fighting for visibility.

Ranking in search results is no longer enough. To win attention in the era of LLMs, your content must earn citations and mentions inside ChatGPT, Claude, Perplexity, and beyond. Authority, structure, and accuracy decide what gets surfaced.

In this article, we’ll walk you through how to rank in AI search.

Article Summary (TL;DR)

To rank in AI search, publish authoritative, accurate, and well-structured content that fully answers user intent. Focus on depth, original insights, and freshness to earn citations and mentions from LLMs like ChatGPT, Claude, and Perplexity. Strengthening brand authority and optimizing for crawlability, schema, and clarity further increase your chances of being surfaced.

Understanding LLMs vs Traditional Search

AI search works differently from traditional search engines. Understanding these differences will help you develop effective strategies to optimize for it.

Here’s how Google currently surfaces information for the prompt “best keyword research tools.”

Google search results page for best keyword research tools showing sponsored ads for Semrush and Moz Keyword Explorer, and popular tools like Google Keyword Planner, Ahrefs, AnswerThePublic, and Serpstat.

And here is that same query run in ChatGPT:

Comparison table of best keyword research tools from ChatGPT including Semrush, Ahrefs, Moz, KWFinder, Serpstat, SpyFu, Ubersuggest, and KeywordTool.io with strengths and trade-offs.

When you see how the results are presented, it’s easy to understand why more and more people are turning to LLMs to find information.

How LLMs Select Content for Answers

Large Language Models don’t rank pages in the same way that Google and other search engines do. These systems retrieve information from training data and real-time searches, then synthesize answers from multiple sources. Several factors drive selection decisions.

Authority signals carry significant weight.

AI platforms tend to favor content from well-recognized experts, established brands, and authoritative domains. For example, a research paper from Stanford will appear far more credible than an anonymous blog post on the same topic. Build brand recognition across the web to improve your odds of being selected.

Content depth matters more than ever.

AI systems prioritize sources that thoroughly address queries over surface-level articles. A 2,000-word guide with independent research and examples will beat a 300-word overview. Comprehensive coverage signals expertise and increases the likelihood of citations from LLMs.

Factual accuracy determines trust.

AI platforms cross-reference information across sources to verify claims. Content with verifiable facts, citations, and supporting data earns preference over unsupported assertions. A single factual error can disqualify otherwise strong content on your site.

Recency affects time-sensitive topics.

Content goes stale faster than most people think. Search engines give preference to fresh updates, especially around current events, tech changes, or best practice shifts. Keep your content maintained and updated so it stays useful, accurate, and competitive.

Structured information extracts cleanly.

Well-organized content with clear headings, proper markup, and a logical flow makes it easier for AI tools to extract information. AI systems can parse and cite structured information more accurately than dense, unformatted text.

Content Strategies for LLM Visibility

Content quality helps determine whether your pages are selected for LLM answers. These strategies can increase the likelihood that your content is cited within AI search.

Create Deep, Expert Content that Solves the Query

As we’ve established, AI platforms reward thoroughness. The content you publish should fully address user intent, because that is what earns selection over partial answers.

Start with intent research. Go beyond keywords and try to understand exactly what users want to know. For example, a search for “email marketing” might mean someone is looking for strategy advice, tool recommendations, or a tactical how-to guide. Each intent demands a different form of content.

Make sure that you are also providing complete answers with all the context needed. Address the main questions plus any logical follow-up questions that users might have when they’re going through your content. An article about Conversion Rate Optimization should cover things like:

  • Measurement
  • Testing methodology
  • Common pitfalls
  • Implementation steps

Anticipate your reader’s questions and solve them before they need to ask.

You can also run your target queries through LLMs like ChatGPT, Claude, and Perplexity to see what they surface. This gives you a much better understanding of what you need to write about, and which sources are already being surfaced that you need to outperform:

Perplexity search result for what is content marketing showing definition, key features, how content marketing works, and benefits with sources from AMA, Salesforce, HubSpot, and StoryChief.

You should also look to add depth to your content through real expertise. Generic advice holds less value than insights backed by experience. Share specific examples, detailed processes, and lessons from actual implementation. The difference between “segment your email list” and “segment by buyers who have purchased within the past 30 days” is everything.

Also, make sure that you’re supporting every claim with evidence. Reference studies, data, and credible sources. AI platforms verify information across multiple sources. Cited facts carry more weight than assertions. Link to original research, not secondary summaries.

Cover the alternatives and the edge cases. Don’t just present one fix as if it’s the only answer. The best content explores different angles, points out the trade-offs, and shows you’ve actually thought it through. That level of detail not only builds trust with readers, but it also helps AI systems understand the full context.

Write in a Conversational, Question-Friendly Format

LLMs use natural language processing to read queries in the same way people do. Because of this, your content needs to read as naturally as possible, like you are answering real questions.

Start with your content’s headings. Ditch vague titles and answer questions that your users are actually asking!

Here’s an example from our blog:

SEOTesting example of SEO A/B test showing click difference between control group and test group for title tag and meta description variations.

Avoid heavy jargon and formal phrasing. Write as if you’re explaining the subject to a smart colleague who isn’t deep in the field.

  • Clear: “Canonical tags tell search engines which URL to prioritize.”
  • Clunky: “Implement canonical URL specifications to consolidate indexing signals.”

Write in the active voice, too. Active voice makes your points stronger and easier to read.

  • Active: “AI platforms prioritize fresh content.”
  • Passive: “Fresh content is prioritized by AI platforms.”

Breaking your content into digestible chunks will help, too. Stick to short paragraphs of two to four sentences and use white space. This will help improve your content’s readability for people and help AI systems parse your content more effectively.

Finally, ensure that you define technical terms in context. If you introduce jargon, explain it right away.

Here’s an example:

“Your article’s Click-Through Rate – the percentage of people who click after seeing your result – shows how relevant your content is.”

Offer Original Research, Case Studies, and Unique Angles

AI platforms tend to reward content that adds something new, not just another rehash of what’s already out there. That’s why original contributions are so powerful. They’re more likely to earn mentions, citations, and authority signals.

One of the best ways to do this is through proprietary research. Surveys, experiments, and unique data analysis generate insights that nobody else can provide. A good example would be the “State of Search” reports from Datos and SparkToro:

Datos and SparkToro State of Search Q1 2025 report landing page highlighting behaviors, trends, and clicks across the US and Europe with key insights on AI search, zero-click results, regional differences, and e-commerce search.

These types of research pieces get cited by LLMs because the data simply doesn’t exist anywhere else.

Detailed case studies work in the same way. Sharing the real challenges you faced, the approach you took, and the quantified results gives readers something actionable and trustworthy. A statement like “we helped X increase organic traffic by 47% after restructuring their content hierarchy” carries far more weight than a generic tip.

The same applies to proprietary data your company may already be sitting on. Data like HubSpot’s “State of Marketing” reports, drawn from its customer base, is cited widely because it surfaces insights that only HubSpot could publish. If you have access to unique data, sharing it establishes authority and creates content others want to reference.

Distinctive frameworks and methodologies are another way that you can stand out. Think of the Pirate Metrics (AARRR) framework. It’s referenced constantly because it offers a fresh mental model for growth. Interviews with recognized experts achieve a similar effect, adding credibility and unique perspectives that both AI platforms and readers value.

You can also challenge conventional wisdom when the evidence supports it. A well-reasoned contrarian take sparks discussion and citations, but it only works if it’s backed by real proof.

And finally, keep in mind that being original isn’t a one-off thing. Research and data show their age quickly, so keep your studies and reports updated. Ensuring you mark your content with a clear “last updated” date will show readers and LLMs that your information is still current.

Keep Content Fresh and Factual

AI systems are constantly assessing both the recency and accuracy of content. If your information is outdated or wrong, the chances of it being cited reduce dramatically. That’s why freshness and factual accuracy are non-negotiable.

The best way to stay on top of this is by setting clear maintenance schedules. Topics that move quickly, like social media algorithms for instance, might need monthly reviews, while more stable subjects, such as core marketing principles, may only need quarterly checks. The update frequency should always match the pace of change in that topic.

SEOTesting’s Content Decay Report can help with this. You can use it to identify pages that have lost clicks (a signal that they might be outdated) so you can work out which content needs refreshing:

SEOTesting Content Decay Report showing URL performance with monthly click data over 12 months, highlighting content losing clicks and identifying opportunities for updates.

Keep close to what’s happening in your industry. Set up Google Alerts, follow the right communities, and watch out for new research, regulations, or best practices. When things change, make sure your content changes with it.

Be transparent about your updates. Display both the original publish date and the update date if your CMS allows. Far from hurting your credibility, this transparency will help to build trust with readers and signal to LLMs that your content is being actively maintained.

Accuracy matters just as much as freshness. Every fact should be verified against authoritative sources, ideally linking back to the original research rather than second-hand blog posts. Outdated stats, old examples, or references to retired tools are all red flags. If you make an error, fix it fast. And for bigger corrections, use an editor’s note.

Technical Optimizations to Boost LLM Visibility

Technical implementation affects how AI platforms discover, crawl, and extract content. These optimizations improve selection probability.

Implement Schema Markup

Structured data, much like it does for search engine crawlers, will help AI systems understand what your content means and how different elements relate. Schema markup helps platforms interpret relevance, freshness, and authority more accurately. This can make your content more likely to be surfaced.

Does that mean structured data is crucial for LLM visibility? No. But it can help indirectly, and it’s good practice to include it anyway.

At a minimum, every article should include Article schema with details like:

  • Headline
  • Author
  • Publish Date
  • Last Modified Date
  • Featured Image

This reinforces both relevance and recency.
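As a sketch, an Article schema block covering those fields might look like this (dates and the image URL are placeholders, not real values from this site):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to Rank in AI Search",
  "author": { "@type": "Person", "name": "Ryan Jones" },
  "datePublished": "2025-06-01",
  "dateModified": "2025-10-03",
  "image": "https://example.com/images/ai-search-guide.png"
}
</script>
```

Place the block in the page’s `<head>` or body; crawlers read JSON-LD wherever it appears in the HTML.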

Other useful types depend on the content’s format. FAQPage schema makes Q&A sections easier for AI systems to extract. HowTo schema works for step-by-step instructions. Organization schema should be applied sitewide to define your brand’s entity, while Person schema strengthens author credibility.

You don’t need every schema type on every page. The goal is to match the right markup to the right content to help AI systems read, understand, and trust it.
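As an illustration, a FAQPage block for a Q&A section might look like this (the question and answer text here are placeholders drawn from this article):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How do I rank in AI search?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Publish authoritative, accurate, well-structured content that fully answers user intent."
    }
  }]
}
</script>
```

Each on-page question gets its own entry in the `mainEntity` array, and the markup should mirror the visible Q&A content exactly.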

Ensure Crawlability and Accessibility

AI platforms use a process called RAG (Retrieval-Augmented Generation) to find up-to-date information when the information they already have (from their training data) might be outdated. Ensuring your content is crawlable and accessible to crawlers is vital for this to happen well.

Start with the basics:

Maintain accurate XML sitemaps and update them as soon as new content goes live. Include lastmod dates to highlight freshness, and submit your sitemaps through Google Search Console and Bing Webmaster Tools so AI systems can find your pages when using RAG. At the same time, keep an eye on your robots.txt file. Restrictive rules often block valuable pages by mistake. Only disallow what’s truly private or duplicate, and stress-test your setup regularly.
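For reference, a sitemap entry with a lastmod date is only a few lines (the URL below is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/how-to-rank-in-ai-search</loc>
    <lastmod>2025-10-03</lastmod>
  </url>
</urlset>
```

Update the `lastmod` value whenever the page meaningfully changes, so the date stays a trustworthy freshness signal.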

Crawlability also depends on your site’s health. Fix broken links, redirect chains, and server errors systematically. Tools like Screaming Frog or Sitebulb can help catch issues before they prevent indexing.

And don’t overlook the fundamentals. Implement HTTPS sitewide, use clean descriptive URLs instead of messy parameter strings, and ensure your design is fully mobile responsive.

The goal here is to make your content as crawlable and accessible as possible for search engines. That, in turn, makes it easier for LLMs like ChatGPT to retrieve your information via RAG.

Structure Content for AI Readability

How you organize your content directly affects how well AI systems can understand and extract information. A well-structured page isn’t just good for UX, it makes your content easier to parse, surface, and cite.

Start with clear, descriptive headings. Use H1, H2, and H3 tags that accurately summarize each section so they stand alone as meaningful labels. Support this with consistent semantic HTML tags like:

  • <article>
  • <section>
  • <aside>

These tags give context about the purpose of different content blocks.

Formatting matters too. Use proper list markup (<ol> and <ul>) instead of faking lists with dashes in your text.
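To make this concrete, here is a minimal sketch of that structure (the headings and copy are illustrative, not a template you must follow):

```html
<article>
  <h1>How to Rank in AI Search</h1>
  <section>
    <h2>How LLMs Select Content for Answers</h2>
    <p>AI platforms tend to favor authoritative, well-structured sources.</p>
    <!-- Real list markup, not dashes styled as a list -->
    <ul>
      <li>Authority signals</li>
      <li>Content depth</li>
      <li>Factual accuracy</li>
    </ul>
  </section>
  <aside>
    <p>Related reading: our guide to schema markup.</p>
  </aside>
</article>
```

Each `<section>` maps to one heading and one focused idea, which is exactly the chunking that helps AI systems extract and cite accurately.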

And include a table of contents for longer pieces. Internal navigation with hyperlinks to different sections helps readers and AI systems quickly locate the right section.

SEOTesting blog table of contents listing sections on advantages of SEO A/B testing, test examples, setup guide, tools, common pitfalls, case studies, and FAQs.

Breaking content into focused, logical chunks also improves extraction accuracy and prevents confusion.

Emphasis should be applied sparingly. Bold and italic text are useful for highlighting key points, but when everything is emphasized, nothing is. The same principle applies to links. Anchor text like “visit our semantic HTML guide” gives far more context than “click here.”

Building Brand Mentions and Authority Signals

Your brand’s presence across the web will help signal authority to AI platforms. These references function as trust signals, even without direct links.

Earn Mentions on High-Authority Sites

AI platforms don’t just look at backlinks. Unlinked brand mentions also influence authority and selection. The more your brand is cited across trusted sources, the stronger the signals of credibility and relevance.

One of the best ways to earn these mentions is by contributing expert commentary to journalists. You can do this yourself through platforms like HARO, Qwoted, and Featured, or work with specialist digital PR teams who will find coverage opportunities and create content that is likely to be featured.

Speaking engagements can also have a similar impact. Send your specialist staff to do conference talks, webinars, and panels. These will often generate mentions across event coverage, social media, and industry blogs.

And, as already mentioned earlier, studies and data-driven insights tend to spread naturally when they’re newsworthy. Especially if you make the data easy to reference or visualize.

Ultimately, building relationships with journalists, editors, and industry publications creates ongoing opportunities for mentions. Becoming a trusted source on specific topics means your brand will surface consistently, even when no links are included with your commentary.

Brand Mentions vs Citations: Two Paths to LLM Visibility

AI platforms reference sources in two ways: brand mentions and citations. Each provides a different kind of value for visibility and authority.

Direct Citations

Direct citations occur when an AI platform attributes specific information to your content, often with a link back to the original source. These are the most tangible signals because they:

  • Drive measurable referral traffic.
  • Provide clear attribution and authority.
  • Typically come from detailed, well-sourced content that answers a question well.

Think along the lines of detailed guides, original research, or deep-dive expert analysis. If you want to attract more citations, make sure you’re following the best practices we’ve already covered in this article around site structure, content, and brand.

This screenshot shows an example of citations from Perplexity:

Perplexity search result for what is content marketing showing definition, key features, how it works, and benefits with highlighted citations from sources like AMA, HubSpot, Salesforce, Mailchimp, StoryChief, and Content Marketing Institute.

Brand Mentions

Brand mentions happen when an AI platform references a company, product, or tool without citing a specific piece of content.

Mentions rarely drive traffic, but they:

  • Reinforce authority and credibility.
  • Build awareness and recognition across industries.
  • Increase the likelihood of being surfaced more in future responses.

The strongest brands consistently receive mentions across query types because of their established reputation and industry presence.

Here’s a brand mention from ChatGPT:

Comparison table of SEO A/B testing and experimentation tools including SearchPilot, SEOTesting, seoClarity, SplitSignal, and SEOScout with their methods, strengths, and considerations.

Why Citations And Brand Mentions Matter

Citations and brand mentions are both hugely important. Citations generate traffic and attribution you can use in your brand’s marketing, while mentions strengthen your brand equity over time. Together, they create a well-rounded authority profile.

Monitoring, Testing, and Improving LLM Visibility

Measuring your brand’s visibility in LLMs requires deliberate tracking. Traditional analytics often misses significant AI-driven activity, like citations and mentions.

Tracking Traffic and Mentions from LLMs

To understand how AI platforms reference your brand, track both traffic and visibility signals.

Start by monitoring referral traffic in your analytics from sources like chat.openai.com, claude.ai, and perplexity.ai. These clicks indicate successful citations. Pair this with brand monitoring tools like Profound to track your brand’s visibility across different LLMs.

SEOTesting’s LLM Traffic Pages Report can help speed this process up. Using your GA4 data, we can show you what pages on your site are being cited from LLMs like ChatGPT, Claude, Perplexity, Gemini, and Copilot:

SEOTesting LLM Traffic Pages report showing sessions from AI assistants like ChatGPT, Claude, Perplexity, Gemini, and Copilot with trend graph and table of top landing pages.

You can go even further by testing queries yourself on AI platforms and documenting when your content shows up. Look for patterns in pages that get cited:

  • Length
  • Depth
  • Structure
  • Freshness

These factors often play a role.

Tracking competitor mentions is equally important, as it highlights where they may be winning in authority over you and your content.

Testing and Iteration

AI visibility isn’t static; it changes as platforms evolve. The only way to stay current is through constant testing.

Test different content formats and structures. Does adding FAQ sections mean your content gets mentioned or cited more often? Do long-form guides outperform shorter, focused pieces? Vary depth, headline style, and update frequency to see what AI systems reference most consistently.

On the more technical side, you can test with schema markup, metadata, and structural variations. But be sure to only test one variable at a time to help you isolate results.

SEOTesting’s LLM Test Type can help with this. Make a change, and our tool will use your GA4 data to tell you if that change led to more sessions from LLMs or not.

SEOTesting LLM Test report for sitemap regex showing sessions before and after the test, with breakdown by ChatGPT, Claude, Perplexity, Gemini, and Copilot, and line graph of traffic trends.

The key is to find what’s working so you can double down on it. That’s where testing gives you a competitive advantage. Share those insights with your wider team, too, so the wins become part of your overall approach rather than staying siloed.

And don’t stop testing! AI platforms change frequently, so what works today may shift in three months. Controlled experiments can quickly reveal what actually drives improvement.

AI search is reshaping how people find information, moving from lists of links to direct answers within chat interfaces. To keep your brand visible, your content needs to be authoritative, accurate, well-structured, and consistently maintained. Earning both citations and brand mentions inside LLM responses is now a crucial part of your marketing strategy.

Treat LLM visibility as its own discipline. Invest in depth, originality, technical accessibility, and continuous testing. Brands that adapt early will not only capture more AI-driven traffic, but also secure long-term authority in this new (growing) era of search.

To help with tests aiming to improve your visibility in AI search, you can use SEOTesting. Our LLM test type will allow you to make changes on your site and monitor whether those changes led to more or less traffic from LLMs. Visit SEOTesting for a free 14-day trial. No credit card required!