Entry date: March 3, 2026
Overview
- We tested 24 queries across four AI platforms.
- The platforms are ChatGPT, Claude, Perplexity, and Gemini.
- We measured two outcomes: brand mentions and citations.
- We compared results to crawler activity in the access logs for rozz.genymotion.com.
- We previously called this a “mirror site.” We are dropping that label.
- The AI site is a standalone site built specifically for AI agents.
- It is more accurately compared to a mobile site: same content, different audience, different artifact.
- We are not optimizing for citations; we are optimizing for AI agents.
Key findings
- ChatGPT has an 83% citation rate.
- Claude has a 21% citation rate.
- Perplexity has a 17% citation rate.
- Gemini has a 4% citation rate.
The correlation (and where it breaks)
| Platform | Crawling the AI site? | Weekly crawl volume | Citation rate | Brand mentioned |
|---|---|---|---|---|
| ChatGPT (GPT-5.2) | Heavy, sustained since January | 1,200+/week | 83% (20/24) | 96% (23/24) |
| Claude | Just activated (Feb 28) | 505 this week, was 21 | 21% (5/24) | 33% (8/24) |
| Perplexity (sonar-pro) | Light, growing | 42 this week, was 14 | 17% (4/24) | 25% (6/24) |
| Gemini | Not crawling | 0 | 4% (1/24) | 38% (9/24) |
For ChatGPT, Claude, and Perplexity, you can draw a straight line: more crawling = more citations. The AI site is doing what it’s supposed to do for these three. Gemini is something else because it ignores the AI site entirely.
ChatGPT: 14% to 83%
Before we built the AI site, genymotion.com showed up in roughly 14% of relevant AI queries. Eight weeks later: 83%. ChatGPT cites Genymotion in 20 of 24 use-case queries we tested, and mentions the brand in 23 of 24. The only thing that changed is that we built an AI site with structured content for AI agents.
Here’s the recap:
- GPTBot’s initial mass crawl (547 requests on January 7) is the event that started this article series.
- This week it made 1,228 requests, mostly to Q&A pages (714), the homepage (307), and GEO content pages (150).
- It moved from training, to indexing, to citing.
- The pages getting cited are consistently on major topics: pricing plans (22 visits this week), macOS compatibility (13), free version availability (13), Google Play Store setup (13).
- When ChatGPT recommends Genymotion, it links directly to genymotion.com pages.
- In most queries, Genymotion ranks at position #1.
- Responses commonly include multiple citation links, which reinforces legitimacy and turns the ChatGPT answer into a warm referral.
- The remaining work is expanding content coverage into weaker use cases.
- CI/CD and app performance monitoring both came in at 13% citation rate, compared to 63% for app development and manual testing.
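The per-page-type tallies above come straight from the access logs. Here is a minimal sketch of how such a tally can be computed from standard CloudFront access logs; the `/qa/` and `/geo/` path prefixes are hypothetical stand-ins for the AI site's real URL structure.

```python
# Sketch: tally one bot's requests by page category from a CloudFront
# access log. Assumes the standard CloudFront format: gzipped,
# tab-separated, with a "#Fields:" header naming the columns.
# The path prefixes below are hypothetical, not the site's real layout.
import gzip
from collections import Counter

def categorize(path: str) -> str:
    if path == "/":
        return "homepage"
    if path.startswith("/qa/"):       # hypothetical Q&A prefix
        return "qa"
    if path.startswith("/geo/"):      # hypothetical GEO content prefix
        return "geo"
    return "other"

def tally_bot(log_path: str, ua_token: str = "GPTBot") -> Counter:
    counts: Counter = Counter()
    fields: list[str] = []
    with gzip.open(log_path, "rt") as fh:
        for line in fh:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]  # column names follow "#Fields:"
                continue
            if line.startswith("#") or not fields:
                continue
            row = dict(zip(fields, line.rstrip("\n").split("\t")))
            if ua_token in row.get("cs(User-Agent)", ""):
                counts[categorize(row.get("cs-uri-stem", ""))] += 1
    return counts
```

Running `tally_bot(path, "GPTBot")` over a week of logs yields the per-category counts reported above; swap the token to track other bots.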
Claude: reading the map, not the pages
Two weeks ago in Entry #4, we wrote: “We’ve been waiting for ClaudeBot to come back for three weeks now. It hasn’t.” Well, it did.
- ClaudeBot read the site’s organizational structure. It crawled 13 topic pages in sequence on March 2, one every 20–30 seconds.
- It had done something similar on February 28: a session that mixed topic pages with GEO content pages, for a total of 8 content pages plus 1 topic page in two minutes.
- ClaudeBot is the only bot that systematically crawls topic pages.
- ChatGPT-User hits individual Q&A and GEO pages during live conversations—176 Q&A pages in the last four days alone.
- GPTBot, ByteSpider, PerplexityBot, Meta AI: zero topic page hits.
The topic sweep
- The 13 topics ClaudeBot crawled in sequence on March 2: android-versions, hardware-architectures, virtualization-technologies, arm-platform-and-gpu, billing-and-subscriptions, licensing-and-eulas, macos-security-toolkit, documentation-and-support, ci-cd-tools, network-security-toolkit, system-image-and-bios, software-installation-and-trials, root-access-and-tools.
What does “reading the map” actually mean?
- Topic pages are the only pages on the AI site with CollectionPage schema.
- Topic pages list every content page and Q&A in that topic, with titles, descriptions, and links.
- They function as the site’s table of contents: not answers to questions, but a map of what the site knows and how it’s organized.
- By crawling all 13 topic pages, ClaudeBot has a complete picture of the site’s knowledge structure.
- ClaudeBot knows which topics the domain covers, how many pages address each topic, and how individual pages relate to each other.
- We can’t yet explain why ClaudeBot has read zero Q&A pages.
- During the same four-day window, ChatGPT-User read 176 Q&A pages.
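For illustration, this is roughly what CollectionPage markup on a topic page could look like, expressed as JSON-LD built in Python. The URLs and page titles are hypothetical, not the site's actual markup; only the schema.org types (CollectionPage, ItemList, ListItem) are standard.

```python
# Sketch: illustrative JSON-LD for a topic page using schema.org's
# CollectionPage type with an ItemList of the pages it indexes.
# URLs and titles are hypothetical placeholders.
import json

topic_page = {
    "@context": "https://schema.org",
    "@type": "CollectionPage",
    "name": "CI/CD Tools",
    "url": "https://rozz.genymotion.com/topics/ci-cd-tools",  # hypothetical URL
    "mainEntity": {
        "@type": "ItemList",
        "itemListElement": [
            {
                "@type": "ListItem",
                "position": 1,
                "name": "Running Genymotion in a CI pipeline",  # hypothetical page
                "url": "https://rozz.genymotion.com/qa/ci-pipeline",
            },
        ],
    },
}

print(json.dumps(topic_page, indent=2))
```

A bot that fetches all 13 such pages gets titles, descriptions, and links for every page on the site, which is exactly the "map without the pages" behavior described above.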
Three possible explanations
1) ClaudeBot is evaluating before committing.
2) Anthropic is building a different kind of index.
3) The Q&A gap explains the citation gap.
The crawl itself
- Outside content sessions, ClaudeBot runs a monitoring loop: robots.txt plus sitemap.xml every 2–3 hours, always from the same IP.
- In the citation test, Claude cited Genymotion with a link in only 5 of 24 queries (21%), including a #1 ranking for a specific “how can support agents replicate mobile app bugs on a virtual device” query and #5 for manual testing.
- Brand mentioned in 8 of 24.
- Three weeks ago ClaudeBot wasn’t crawling at all; this week it made 505 requests.
- The trajectory exists. It remains unknown whether ClaudeBot’s structural approach will eventually match ChatGPT’s content-first method.
Perplexity: promising, but we haven’t cracked it yet
- PerplexityBot: 42 requests this week, up from 14; its crawl is light but growing.
- In the citation test, Perplexity cited Genymotion in 4 of 24 queries (17%).
- The hits cluster where the AI site content is deepest: mobile security, manual testing, app development.
- Brand mentioned in 6 of 24.
- The pattern resembles the early days of ChatGPT: light crawling with citations starting to appear in strongest areas.
- Perplexity has its own crawler (PerplexityBot), its own index, and its own retrieval pipeline.
- That is the same crawl-and-cite architecture the AI site has already proven to work with on ChatGPT.
- A weekly total of 42 requests is still light.
- We have not yet triggered a deep indexing event for PerplexityBot.
- GPTBot’s jump on January 7 (547 requests in one day) remains a reference moment.
- We are waiting for PerplexityBot’s version of that moment.
Gemini: a completely different game
We originally misinterpreted Gemini’s behavior, and the actual data are more interesting.
What Gemini actually does
- Manual Testing query: Gemini recommends Genymotion as #2; describes its UI, sensor emulation, cloud browser access; mentions it 6 times; zero citation links.
- App Development query: Gemini recommends Genymotion as #2; describes cloud integration, lightweight client, sensor emulation; mentions it 4 times; zero citation links.
- Embed on Website query: Gemini recommends Genymotion for enterprise use; describes WebRTC streaming and hosting; mentions it 6 times; zero citation links.
- Pricing query: Gemini cites genymotion.com at position #1 with 13 citation links; provides exact pricing.
- Gemini does not have a knowledge problem. It has a linking problem.
- It knows Genymotion well enough to recommend with feature details in every query. It does not send anyone to genymotion.com.
The pipeline difference
- Three out of four queries: Gemini answers from its training data; good answers with no links.
- One out of four (pricing): Gemini triggers a live Google Search, grounds its response in web results, and cites via Google’s Vertex AI redirect URLs.
- That is the only query where genymotion.com gets a link.
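Given a response's citation links, grounded answers can be separated from direct ones by checking for Google's redirect host. A sketch; the exact hostname below is our assumption about the Vertex AI grounding redirect, so verify it against the links in your own Gemini responses.

```python
# Sketch: split a response's citation links into grounded (routed through
# Google's redirect) vs. direct links. GROUNDING_HOST is an assumption
# based on the redirect URLs we observed, not a documented constant.
from urllib.parse import urlparse

GROUNDING_HOST = "vertexaisearch.cloud.google.com"  # assumed redirect host

def classify_links(links: list[str]) -> dict[str, list[str]]:
    out: dict[str, list[str]] = {"grounded": [], "direct": []}
    for link in links:
        host = urlparse(link).netloc
        out["grounded" if host == GROUNDING_HOST else "direct"].append(link)
    return out
```

On the pricing query, every citation link lands in the "grounded" bucket; on the other three queries there are no links to classify at all.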
What this means
- ChatGPT sends traffic.
- Gemini sends word of mouth.
- When ChatGPT recommends Genymotion, there is a direct link to genymotion.com in the response.
- When Gemini recommends Genymotion, there is nothing to click. The user must Google it.
What this means for GEO strategy
- The AI site works for platforms that have their own crawler and retrieval index: ChatGPT, Claude, Perplexity.
- Gemini does not have its own AI crawler; it relies on training data or live Google Search results instead.
- GEO strategy requires two tracks: one for crawl-and-cite platforms (ChatGPT, Claude, Perplexity), and another for Gemini, where the rules differ.
Where we are
- ChatGPT: solved. 83% citation rate. Direct links to genymotion.com. The AI site approach works. Remaining work is expanding coverage into weaker use cases.
- Claude: structurally aware, content-light. 21% citation rate, growing. ClaudeBot is the only bot reading the site’s topic taxonomy.
- Perplexity: promising. 17% with light but growing crawl activity. We expect it to follow the ChatGPT curve.
- Gemini: different game. Recommends Genymotion in most queries but does not link. Citations only occur when Gemini triggers a Google search.
Data source and authorship
- Data source: CloudFront access logs for rozz.genymotion.com, February 24 – March 3, 2026 (crawl data).
- Citation tests conducted March 2–3, 2026: 24 queries tested on ChatGPT, Claude, Perplexity, and Gemini.
- Bot classification based on User-Agent strings.
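A minimal sketch of that User-Agent classification. The substring tokens cover the bots named in this entry; real classification should ideally be cross-checked against the vendors' published IP ranges, since User-Agent strings can be spoofed.

```python
# Sketch: classify crawler hits by User-Agent substring.
# Tokens cover the bots discussed in this entry; the labels are ours.
BOT_TOKENS = {
    "GPTBot": "OpenAI (training/index crawler)",
    "ChatGPT-User": "OpenAI (live browsing during chats)",
    "ClaudeBot": "Anthropic",
    "PerplexityBot": "Perplexity",
    "Bytespider": "ByteDance",
    "meta-externalagent": "Meta AI",  # Meta's published crawler token
}

def classify_ua(user_agent: str) -> str:
    for token, label in BOT_TOKENS.items():
        if token in user_agent:
            return label
    return "unclassified"
```

Everything that falls through to "unclassified" is treated as regular traffic rather than AI crawl volume.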
Author
Adrien Schmidt, CEO, ROZZ.
- Serial tech entrepreneur with 10+ years of experience building AI systems. Previously founded Squid Solutions; built Aristotle, a conversational big data analytics chatbot, products for eBay, and an AR jewelry try-on device for Cartier.
rozz@rozz.site · © 2026 ROZZ. All rights reserved.