◉ HOW IT WORKS
Veezow answers one question: can AI systems crawl, understand, and cite your site? Here's the four-step loop every scan runs.
Probe
We fetch robots.txt for all five AI bots (GPTBot, ClaudeBot, PerplexityBot, Google-Extended, CCBot), pull your sitemap, check Common Crawl presence, and sample your homepage meta + schema. The whole pass takes about 12 seconds.
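The robots.txt part of the probe can be sketched with Python's stdlib `urllib.robotparser` (note the stdlib parser implements the classic Robots Exclusion Protocol, not every RFC 9309 nuance). The bot names come from the scan description; the robots.txt text and `probe` helper are hypothetical examples.

```python
# Sketch of the probe step: which AI crawlers does a robots.txt allow
# to fetch a given URL? AI_BOTS matches the five bots Veezow checks;
# ROBOTS_TXT is a made-up example file that blocks only GPTBot.
from urllib.robotparser import RobotFileParser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin/
"""

def probe(robots_txt: str, url: str) -> dict:
    """Return {bot_name: allowed?} for each AI bot against one URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, url) for bot in AI_BOTS}

result = probe(ROBOTS_TXT, "https://example.com/")
for bot, allowed in result.items():
    print(f"{bot:16} {'allowed' if allowed else 'blocked'}")
```

Here the GPTBot-specific `Disallow: /` blocks only GPTBot; the other four bots fall through to the `*` group and stay allowed on the homepage.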
Score
You get an AI Visibility Score out of 100, a per-engine breakdown of what each engine can read on your site today, and a priority-ordered list of specific fixes, not vague advice.
Fix
Apply the top three fixes this week. Most teams ship in under a day: unblock a bot, tighten a canonical, publish an Organization schema, add a missing sitemap entry.
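One of the fixes named above, publishing an Organization schema, is just a JSON-LD block in the page head. A minimal sketch of generating one, with placeholder field values rather than anything Veezow-specific:

```python
# Build a schema.org Organization JSON-LD snippet. All field values
# here ("Example Co", the URLs) are hypothetical placeholders.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com",
    "logo": "https://example.com/logo.png",
    "sameAs": ["https://en.wikipedia.org/wiki/Example"],
}

# Wrap the JSON in the <script> tag that belongs in the page <head>.
snippet = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(
    organization, indent=2
)
print(snippet)
```

Dropping the printed `<script>` block into your homepage `<head>` is usually the entire fix.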
Monitor
Sign up (free) and we re-scan your tracked domains every week. Get alerts the moment ChatGPT stops citing you, or a robots.txt change accidentally blocks ClaudeBot.
◉ WHAT WE READ
robots.txt
Every URL path × every AI bot user-agent, parsed to the same precedence rules the bots follow.
Sitemap
Presence, size, freshness, and canonical-vs-origin consistency across the tree.
Common Crawl
Whether CCBot has actually indexed your content, a proxy for AI training-data reach.
Homepage
Meta robots, canonical, JSON-LD structured data, Open Graph, Twitter Card.
Entity presence
Wikipedia page, Wikidata entity, Wayback Machine history, Reddit/HN mentions.
Authority
Open PageRank as a stable external proxy for cross-engine trust.
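The homepage sampling above (meta robots, canonical, JSON-LD) can be sketched with Python's stdlib `html.parser`; the `HeadScanner` class and the example page are hypothetical, not Veezow's actual extractor.

```python
# Sketch of homepage sampling: pull meta robots, the canonical link,
# and JSON-LD blocks out of an HTML document.
from html.parser import HTMLParser
import json

class HeadScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta_robots = None   # content of <meta name="robots">
        self.canonical = None     # href of <link rel="canonical">
        self.jsonld = []          # parsed application/ld+json blocks
        self._in_jsonld = False
        self._buf = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.meta_robots = a.get("content")
        elif tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")
        elif tag == "script" and a.get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)  # buffer in case content arrives in chunks

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self.jsonld.append(json.loads("".join(self._buf)))
            self._buf = []
            self._in_jsonld = False

# Hypothetical example page.
HTML = """<html><head>
<meta name="robots" content="index,follow">
<link rel="canonical" href="https://example.com/">
<script type="application/ld+json">{"@type": "Organization", "name": "Example Co"}</script>
</head><body></body></html>"""

scanner = HeadScanner()
scanner.feed(HTML)
print(scanner.meta_robots, scanner.canonical, scanner.jsonld[0]["@type"])
```

A real scan would fetch the page over HTTP first; the parsing step itself is this small.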