What we measure
Each brand has a set of tracked prompts — questions a typical customer might ask an AI answer engine. For each prompt, we send the exact query text to four engines: ChatGPT, Claude, Gemini, and Perplexity. For each response, we record whether the engine mentions the brand’s name (word-boundary match), whether it quotes a number in the same sentence as the brand, and whether the response cites any of the brand’s tracked URLs.
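The three per-response checks above can be sketched as follows. This is a minimal illustration, not the production implementation; the helper names and the naive sentence splitter are assumptions.

```python
import re

def mentions_brand(text: str, brand: str) -> bool:
    """Word-boundary match on the brand name, case-insensitive."""
    return re.search(rf"\b{re.escape(brand)}\b", text, re.IGNORECASE) is not None

def number_near_brand(text: str, brand: str) -> bool:
    """True if any single sentence contains both the brand and a number."""
    pattern = rf"\b{re.escape(brand)}\b"
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if re.search(pattern, sentence, re.IGNORECASE) and re.search(r"\d", sentence):
            return True
    return False

def cites_tracked_url(text: str, tracked_urls: list[str]) -> bool:
    """True if the response contains any of the brand's tracked URLs."""
    return any(url in text for url in tracked_urls)
```

Note that `number_near_brand` deliberately returns False when the brand and the number appear in different sentences, matching the same-sentence restriction above.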
What “lift” means
When a brand publishes content through GEO Radar, we compute two numbers.
- Baseline mention rate = (brand mentions) / (total responses) across all four engines in the 14 days before the content was published, restricted to the specific prompt the content targets.
- Lift mention rate = the same calculation at +3, +7, +14, and +30 days after publish, using only the responses captured during that offset’s scan window.
The dashboard headline shows weighted-average lift across all of a brand’s published briefs, weighted by response count so that a brief with one scan doesn’t outvote a brief with ten.
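The arithmetic above can be sketched in a few lines. The record shapes and field names here are assumptions for illustration, not the dashboard's actual schema.

```python
def mention_rate(responses: list[dict]) -> float:
    """(brand mentions) / (total responses); 0.0 when there are no responses."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if r["mentioned"]) / len(responses)

def weighted_average_lift(briefs: list[dict]) -> float:
    """Per-brief lift rates averaged with response-count weights."""
    total = sum(b["response_count"] for b in briefs)
    if total == 0:
        return 0.0
    return sum(b["lift_rate"] * b["response_count"] for b in briefs) / total
```

With one brief at a 0.9 lift rate backed by a single response and another at 0.1 backed by nine, the weighted headline is 0.18 rather than the unweighted 0.5, which is exactly the "one scan doesn't outvote ten" behavior.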
What we do NOT claim
We do not measure model training data. We cannot see inside the engines. We cannot attribute whether our content was the cause of a mention — only whether the mention rate changed after our content shipped. “Did the needle move?” is the only question this dataset answers.
Cadence
Lift scans run automatically via a cron worker. Every Monday at 12:00 UTC we send each brand's tracked prompts to each engine and log the exact responses. Every Sunday at 23:30 UTC we pre-flight our own published proof pages with the canonical user-agent of each engine to verify the pages are crawlable before the Monday measurement, so a zero-citation reading is never confounded with a zero-reachability failure.
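The two cron anchors can be sketched as a small scheduling helper. This is an illustration of the timing logic only, assuming the scheduler works in UTC; it is not the production worker.

```python
from datetime import datetime, timedelta, timezone

def next_run(now: datetime, weekday: int, hour: int, minute: int) -> datetime:
    """Next occurrence of weekday (Mon=0) at hour:minute UTC, strictly after now."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    candidate += timedelta(days=(weekday - now.weekday()) % 7)
    if candidate <= now:
        candidate += timedelta(days=7)
    return candidate

def next_scan(now: datetime) -> datetime:
    return next_run(now, 0, 12, 0)   # Monday 12:00 UTC lift scan

def next_preflight(now: datetime) -> datetime:
    return next_run(now, 6, 23, 30)  # Sunday 23:30 UTC crawlability pre-flight
```

From any mid-week moment, `next_preflight` lands 12.5 hours before `next_scan`, which is what guarantees the reachability check always precedes the measurement it protects.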
Data retention
All measurements are stored indefinitely by default and scoped to the brand’s organization. A brand can export or delete its data at any time via the dashboard.