How to Rank on ChatGPT in 2026
"Ranking on ChatGPT" is not the same as ranking on Google. There are no positions, no pages of results, and no real-time bidding. When someone asks ChatGPT a question about your industry, it either mentions you or it does not.
We built an open-source tool called canonry to measure this. Canonry asks AI models the same queries your customers would ask, records whether they mention a specific business, and tracks how those answers change over time. Each check is called a "run." We tracked 11 keywords across 66 runs over two weeks for a local service business. The data paints a clear picture of what works and what does not.
The numbers: what citation monitoring shows
Here is what citation rates look like across different query types:
| Query type | Example | Citation rate |
|---|---|---|
| Branded + location | "[business type] [city]" | 82-90% |
| Generic + location | "[industry] agency [city]" | 31% |
| Competitive | "best [industry] agency [city]" | 4% |
| Informational | "how to [do something]" | 0% |
The pattern is stark. When the query closely matches your brand + location, models cite you most of the time. When the query is generic or informational, citation drops off a cliff. For "how to rank on ChatGPT" specifically, we have 0 citations across 20 runs. Models answer with generic advice or cite Semrush, Neil Patel, and Search Engine Journal instead.
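To make "citation rate" concrete: it is just mentions divided by runs, grouped by query type. Here is a minimal Python sketch over hypothetical run records (the `(query_type, was_cited)` shape is an assumption about how a tool like canonry might log each check, not its actual data format):

```python
from collections import defaultdict

# Hypothetical run log: one (query_type, was_cited) pair per monitoring run.
runs = [
    ("branded", True), ("branded", True), ("branded", False),
    ("generic", True), ("generic", False), ("generic", False),
    ("informational", False), ("informational", False),
]

totals = defaultdict(lambda: [0, 0])  # query_type -> [citations, total runs]
for query_type, cited in runs:
    totals[query_type][0] += int(cited)
    totals[query_type][1] += 1

for query_type, (cited, n) in totals.items():
    print(f"{query_type}: {cited}/{n} = {cited / n:.0%}")
```

The per-type grouping is the important part: a blended rate across all query types would hide exactly the branded-versus-informational gap the table shows.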
This tells us two things:
- Entity strength matters. If AI models have a strong entity representation of your business, they will recommend you for branded queries.
- Content gaps are real. If you have not published content that directly targets an informational query, you will not get cited for it regardless of how strong your brand is.
How ChatGPT decides what to recommend
ChatGPT uses two sources:
- Training data. The model knows about you if you had a web presence before its training cutoff.
- Web browsing. ChatGPT browses the web in real time using its own crawler (OAI-SearchBot) and a retrieval system that has been observed pulling from both Bing and Google. The exact mix is not fully public and appears to evolve.
The browsing path is where most businesses should focus. You cannot retroactively change training data, but you control what ChatGPT finds when it browses.
Because ChatGPT's retrieval system draws from multiple search engines, broad indexing matters. Perplexity also runs its own real-time search, and Claude has web search capabilities too. If you have only submitted your sitemap to Google, submit it to Bing Webmaster Tools today. Being indexed by both Google and Bing gives you the best coverage across all AI providers.
The citation volatility problem
One of the most useful findings from the monitoring data: citations are not stable. Even for queries where a site is well-positioned, the model drops it roughly 1 in 5 times.
For the strongest branded keyword in the dataset, here is the loss/recovery pattern over two weeks:
- Mar 14: Lost, recovered within 24 hours
- Mar 18: Lost, recovered same day
- Mar 23: Lost, recovered next day
- Mar 26: Lost, recovered within hours
- Mar 27: Lost, recovered within hours
Every single loss was followed by a recovery. The model did not permanently forget the business. It simply has natural variance in how it constructs responses.
The practical implication: do not panic over a single check. If you ask ChatGPT your target query once and it does not mention you, that is not necessarily a problem. You need trend data, not snapshots. This is why automated monitoring matters. Checking once tells you almost nothing. Checking 66 times tells you your actual citation rate.
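The snapshot-versus-trend distinction is easy to see in code. This sketch uses simulated check results (the boolean series is invented for illustration, not from the dataset above):

```python
# Simulated daily checks for one keyword: True = the model cited the business.
# A single check can land on a "lost" day; the aggregate rate tells the story.
checks = [True, True, False, True, True, True, False, True, True, True]

snapshot = checks[2]              # one unlucky check: looks like a total loss
rate = sum(checks) / len(checks)  # trend across all runs

print(f"snapshot says cited: {snapshot}")
print(f"citation rate over {len(checks)} runs: {rate:.0%}")
```

The snapshot reads as a failure while the trend shows an 80% citation rate, which is the same loss/recovery pattern as the dated list above, just compressed.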
Make sure ChatGPT can find you
Check your robots.txt for OAI-SearchBot:
```
User-agent: OAI-SearchBot
Allow: /
```
The OpenAI documentation lists all their crawler user agents. Blocking OAI-SearchBot makes you completely invisible to ChatGPT's web browsing.
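You can verify this programmatically with Python's standard-library `urllib.robotparser`, which applies the same user-agent matching rules crawlers use. The robots.txt content below is a made-up example; in practice you would fetch your own file:

```python
from urllib import robotparser

# Example robots.txt that allows OAI-SearchBot but restricts other bots.
ROBOTS_TXT = """\
User-agent: OAI-SearchBot
Allow: /

User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# OAI-SearchBot matches its own rule group, so the whole site is fetchable.
print(rp.can_fetch("OAI-SearchBot", "https://yourbusiness.com/services"))
```

Swapping in `rp.set_url("https://yourbusiness.com/robots.txt")` and `rp.read()` checks your live file instead of the inline example.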
Structure content for extraction
When ChatGPT browses a page, it extracts chunks and synthesizes them. Pages that are easy to extract from get cited more. The @ainyc/aeo-audit tool (which you can run on any URL) measures this with a Content Extractability factor that scores how easy it is for an AI model to pull clean facts from your page.
In the audit data, one site scored 65/100 on extractability despite scoring 87/100 on content depth. Plenty of content, but the markup made it hard to parse. Another site scored 45/100 on extractability with 72/100 on depth. The gap between "content exists" and "content is extractable" is real.
What works:
- Lead with the answer. If your page targets "commercial roof coatings in Michigan," the first paragraph should state what you do, where, and why. Not a company history.
- Question headings. "How much does commercial roof coating cost?" is more extractable than "Pricing Information." Models map user queries to headings.
- Short paragraphs. Two to four sentences. Models extract paragraph-level chunks.
- Specific numbers. "200+ projects since 2019" is more citable than "extensive experience."
Add structured data
In the aeo-audit scoring framework, structured data is the most heavily weighted factor (12 points out of 100). The site scoring 90/100 overall has complete schema markup (LocalBusiness, Service, FAQPage, HowTo). The site scoring 48/100 scores 42/100 on structured data and has zero AI citations across 23 tracked keywords.
Priority schemas:
- LocalBusiness with name, address, geo, service area, hours
- Service for each service, linked to parent business
- FAQPage on pages with Q&A content
- AggregateRating if you have reviews
The schema markup guide has copy-pasteable JSON-LD for each type. Google's Rich Results Test validates your implementation.
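As a starting point, here is a minimal LocalBusiness sketch built as a Python dict and serialized to JSON-LD. Every value is a placeholder; the output belongs inside a `<script type="application/ld+json">` tag on your homepage:

```python
import json

# Minimal LocalBusiness JSON-LD with placeholder values -- replace every
# field with your real business details before publishing.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Roofing Co",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Detroit",
        "addressRegion": "MI",
        "postalCode": "48201",
    },
    "geo": {"@type": "GeoCoordinates", "latitude": 42.33, "longitude": -83.05},
    "areaServed": "Michigan",
    "openingHours": "Mo-Fr 08:00-17:00",
}

print(json.dumps(local_business, indent=2))
```

Run the resulting markup through Google's Rich Results Test before shipping it.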
Build external authority
A business mentioned only on its own website is less likely to be cited than one that appears across directories, review sites, and press.
Practical authority signals:
- Google Business Profile with complete info and recent reviews
- Industry directories relevant to your vertical
- Review platforms like Yelp, Trustpilot, BBB
- Backlinks from authoritative domains
This is the same citation-building work local SEO has always emphasized. The difference is AI models use these signals for entity resolution, not just PageRank.
Definition blocks: the most overlooked factor
In the aeo-audit scoring, definition blocks carry a weight of 6, and most sites score poorly on them. One site in the dataset scores 0/100 because no page opens with a direct definition of what the business does.
A definition block is simple: "X is Y. It does Z for W."
If someone asks ChatGPT "what is [your service]," the model needs a sentence to pull. If your homepage starts with "Welcome to our company" instead of "[Company Name] is a [service type] provider serving [location]," you are making the model guess. Models do not guess when they have better options.
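A crude self-check: does your opening sentence match the "[Name] is a [thing]" shape? This regex heuristic is illustrative only (the pattern and sample openers are invented), but it catches the "Welcome to our company" anti-pattern:

```python
import re

# Rough heuristic: a definition block opens with "[Name] is a/an [thing] ...".
DEFINITION = re.compile(r"^[\w&.' -]+ is an? .+", re.IGNORECASE)

openers = [
    "Welcome to our company, where quality comes first.",
    "Acme Coatings is a commercial roof coating provider serving Michigan.",
]

for text in openers:
    verdict = "definition" if DEFINITION.match(text) else "not a definition"
    print(f"{verdict}: {text}")
```

A regex will never capture every valid phrasing, but if your homepage's first sentence fails even this loose test, the model has nothing clean to pull.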
What to do this week
- Check robots.txt for OAI-SearchBot blocks
- Submit sitemap to Bing Webmaster Tools
- Add LocalBusiness schema to your homepage
- Rewrite your main service page with a definition block in the first paragraph
- Run a free AEO audit to see your score across all 13 factors
Then start monitoring. Not once. Repeatedly. The loss/recovery patterns we described above are only visible over time. Ask ChatGPT, Gemini, Perplexity, and Claude your target queries weekly, or set up canonry to automate it across all four.
```shell
npx @ainyc/aeo-audit@latest "https://yourbusiness.com"
```
The audit gives you a baseline. The monitoring tells you if your changes are working. Both are open source.
FAQ
Can I pay to rank on ChatGPT?
No. There is no paid placement in ChatGPT responses. In our monitoring data, citations correlate with structured data quality, content extractability, and external authority signals, not advertising spend.
Does ChatGPT use Google results?
No. ChatGPT browses the web using its own retrieval system, which has been observed pulling from both Bing and Google. It also has its own crawler (OAI-SearchBot). The exact mix of sources is not fully public and appears to change over time.
How often does ChatGPT update its knowledge?
ChatGPT has a training data cutoff that updates with each model release. The browsing feature pulls live information from the web (the exact sources are not fully public). In our monitoring, we see ChatGPT answers change day to day for the same query, suggesting it re-fetches frequently.
Does my Google ranking affect ChatGPT?
Indirectly. Strong SEO signals correlate with AI citation, but ChatGPT has its own retrieval system that does not map 1:1 to Google rankings. In our data, we track queries where sites rank top 3 on Google but get zero ChatGPT citations.
Try it yourself.
Run a free AEO audit to see how your site scores, or explore the tools and pages referenced in this article.