Canonry - The ultimate AEO monitoring tool, open source.

Arber X · March 13, 2026 · 5 min read


When we started doing AEO work, the tools available to us were limited, proprietary, and expensive. We needed something better. We built Canonry not only as a tool we could rely on ourselves, but also to make AEO monitoring accessible to everyone!

What even is AEO monitoring?

AEO (Answer Engine Optimization) monitoring is the process of seeing how "answer engines" cite your website in their search results, while tracking your website's performance across various dimensions over time for specific key phrases. Canonry lets you input your domain and the key phrases you want to monitor, then queries the web search functionality of LLMs (OpenAI, etc.) to gather data on how your website is being cited.

This is extremely useful in a variety of ways:

  • You get to see whether answer engines cite you correctly.
  • You can track whether your website has authority for specific questions and queries.
  • You can identify gaps in your content strategy and fill them.
  • You can research how different LLMs parse and understand search queries.

A practical example

You go to ChatGPT and type in "AEO Agency NYC" because you're looking for an agency that specializes in AEO. How does ChatGPT find the right answer? Which sources does it cite? Let's look at an example:

AEO Agency NYC search results

The above shows a ChatGPT search result for "AEO Agency NYC" on March 12th, 2026. Things to notice here:

  • Only three results are shown
  • ChatGPT links to the websites of the top results, showing the title, snippet, and URL.

Now, this is one snapshot of a single query at a single point in time. What if you make changes to your website? How will that affect the search results? What if you search for "NYC AEO Agency" instead? Canonry helps you monitor these changes over time and track the impact on your website's visibility.

Canonry

Canonry is open source software on GitHub. Right now, Canonry is meant for technical users (though Claude can definitely help you set it up locally!).

Agent First

One of the many cool features of Canonry is its agent-first approach. Everything that's configurable in the web UI is also available via API and CLI. This means you can automate your monitoring and analysis using scripts or integrate it into your existing workflows.
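As a sketch of what scripting against Canonry could look like, here is a small helper that builds the HTTP request for triggering a monitoring run. The endpoint path and payload shape are illustrative assumptions, not Canonry's documented API:

```typescript
// Hypothetical sketch of scripting a Canonry-style API. The endpoint path
// and payload fields below are assumptions for illustration, not the
// actual Canonry API surface.
interface RunRequest {
  url: string;
  method: "POST";
  body: string;
}

// Build the request that would kick off a visibility run for a project.
function buildRunRequest(
  baseUrl: string,
  projectId: string,
  keyPhrases: string[]
): RunRequest {
  return {
    url: `${baseUrl}/api/projects/${encodeURIComponent(projectId)}/runs`,
    method: "POST",
    body: JSON.stringify({ keyPhrases }),
  };
}
```

A request built this way could be dispatched with `fetch` from a cron job or CI pipeline, which is the kind of workflow the agent-first design is meant to enable.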

Getting Started

When you run Canonry, you're met with the home page:

Canonry dashboard

Here you set up providers (LLM APIs like Gemini, OpenAI, and Claude, or a local LLM), all using your own API keys. Then, more importantly, you configure your domain, which becomes your project:

Canonry domain configuration

Next, the most important part: the key phrases and potential competitors you want to track:

Canonry key phrases

Canonry competitors

These phrases and competitors are what Canonry tracks over time to monitor your website's visibility, and they can be updated at any time to reflect changes in your strategy. When Canonry runs a search for your key phrases across all the providers you configured, it looks not only for your website's citations but also for your competitors'. This way, you can see how your website performs relative to the competition.
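Since Canonry also supports config-as-code, a project definition along these lines is conceivable; the field names below are illustrative assumptions, not Canonry's actual schema:

```typescript
// Illustrative project configuration (field names are assumptions,
// not Canonry's real config schema).
const project = {
  domain: "ainyc.ai",
  keyPhrases: ["AEO Agency NYC", "NYC AEO Agency"],
  competitors: ["example-competitor.com"], // hypothetical competitor domain
  providers: ["openai", "gemini", "claude"],
};
```

Keeping this in version control means changes to your tracked phrases and competitors are reviewable and reversible, like any other code.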

And finally, you trigger your first run! This takes you to your project dashboard, where you can see your website's visibility over time, trigger runs, set up scheduled runs, configure webhook alerts, and more!

Canonry project dashboard

Let's look at some useful data

If I expand one of the key phrases in the visibility dashboard, I can see a breakdown of how I was cited across all configured providers and all the runs I've made, with changes highlighted. For example, for ainyc.ai and the key phrase "AEO Agency NYC", I can see that I only started getting cited by Claude in the last two runs:

Canonry citation breakdown

Further, I can view the specific evidence for each run to understand how I was cited, for example, the exact text that was cited and the URL where it was found.

Canonry citation evidence

Future State

Canonry already handles multi-provider visibility runs, scheduling, webhooks, config-as-code, and a full API surface. But we are just getting started. The full roadmap is public, and here are the highlights:

Coming next: core metrics

Right now Canonry tracks binary cited/not-cited. The immediate priority is richer metrics:

  • Share of Voice (SOV). The single most requested AEO metric. SOV = (runs where cited / total runs) as a percentage, computed per keyword and aggregated per project. This makes Canonry dashboards immediately comparable to paid tools.
  • Citation position and prominence tracking. Record where in the answer your domain appears and whether it shows up in the first paragraph. This transforms flat binary tracking into ranked visibility.
  • Competitor SOV comparison. Extend SOV to show how your competitors perform alongside you for each keyword. Answers "who is winning the AI answer war for this keyword?"
  • Sentiment classification. Classify mentions as positive, neutral, or negative. There is a big difference between "Brand X is the industry leader" and "Brand X has been criticized for..."
  • Results CSV/JSON export. Export snapshot data as CSV for BI tool integration (Excel, Looker Studio, Tableau) without API coding.
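The SOV formula above (runs where cited / total runs, as a percentage, per keyword) can be sketched in a few lines. The `RunResult` shape is a hypothetical record for illustration, not Canonry's actual data model:

```typescript
// Hypothetical per-run record: was our domain cited for this keyword?
interface RunResult {
  keyword: string;
  cited: boolean;
}

// Share of Voice per keyword: (runs where cited / total runs) * 100.
function shareOfVoice(results: RunResult[]): Map<string, number> {
  const counts = new Map<string, { cited: number; total: number }>();
  for (const r of results) {
    const c = counts.get(r.keyword) ?? { cited: 0, total: 0 };
    c.total += 1;
    if (r.cited) c.cited += 1;
    counts.set(r.keyword, c);
  }
  const sov = new Map<string, number>();
  for (const [keyword, c] of counts) {
    sov.set(keyword, (c.cited / c.total) * 100);
  }
  return sov;
}
```

For example, being cited in three out of four runs for a keyword yields an SOV of 75% for that keyword.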

Deeper analysis and new providers

  • Perplexity provider. Expands engine coverage from 3 to 4+ providers using Perplexity's OpenAI-compatible API.
  • Answer diff viewer. Side-by-side comparison of how AI answers changed over time for the same query. Even most paid tools do not show full answer diffs.
  • Site audit integration. Wire in @ainyc/aeo-audit to give every project a Technical Readiness score alongside Answer Visibility. Two score families in one dashboard.
  • Content optimization recommendations. For keywords where you are not cited, analyze what sources were cited and why, then generate actionable recommendations to close the gap.
  • Anomaly detection and smart alerts. Track rolling SOV averages and alert only when SOV drops or spikes beyond a configurable threshold, reducing noise.
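The anomaly-detection rule described above could be approximated as follows; the window and threshold defaults are illustrative assumptions, not Canonry's planned implementation:

```typescript
// Illustrative sketch: alert when the latest SOV deviates from the rolling
// average of the previous `window` runs by more than `threshold` points.
// Defaults are assumptions, not Canonry's actual settings.
function sovAnomaly(
  history: number[],      // SOV percentages, oldest first
  window: number = 7,     // rolling window size
  threshold: number = 10  // alert beyond this many percentage points
): boolean {
  if (history.length < window + 1) return false; // not enough data yet
  const latest = history[history.length - 1];
  const prior = history.slice(-(window + 1), -1); // the window before `latest`
  const avg = prior.reduce((sum, v) => sum + v, 0) / prior.length;
  return Math.abs(latest - avg) > threshold;
}
```

With a history of seven runs at 50% SOV followed by a jump to 80%, this fires an alert; a drift to 55% stays quiet, which is the noise reduction the roadmap item is after.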

Long-term initiatives

  • Google AI Overviews provider. Track visibility in Google's AI Overview snippets.
  • Historical trend analytics and forecasting. Time-series analytics over SOV, sentiment, and citation position with 7/30/90 day trends.
  • Integrations ecosystem. Slack alerts, Google Sheets export, Looker Studio data source, and Zapier/n8n webhook documentation.

All of this will remain open source. The full roadmap includes a priority matrix and implementation details for every feature. If you want to contribute or follow along, check out the GitHub repo.

Is Canonry free to use?

Yes. Canonry is free to use under the FSL-1.1-ALv2 (Functional Source License). The code is publicly available and converts to Apache 2.0 after two years. You run it locally with your own API keys, so the only cost is the LLM API usage from the providers you configure.

Which AI providers does Canonry support?

Canonry supports OpenAI, Google Gemini, Anthropic Claude, and any local LLM with a compatible API. You can configure multiple providers and compare citation results across all of them.

Do I need to be technical to use Canonry?

Right now, yes. Canonry requires a local setup with Node.js and your own API keys. We plan to add a hosted option in the future, but for now it is designed for developers and technical marketers.

How is Canonry different from other AEO monitoring tools?

Most AEO monitoring tools are proprietary and expensive. Canonry is open source, agent-first, and uses your own API keys. You own your data and can automate everything through the CLI or API.

Try it yourself.

Run a free AEO audit to see how your site scores, or explore the tools and pages referenced in this article.