We turned MentionFox Aggressive Autopilot loose on ritekit.com on 2026-05-09. No human in the loop, no hand-tuned content. Goal: see how 30 days of fully automated AI-visibility work moves the needle against the day-0 baseline.
Each snapshot reports per-LLM win rate, top cited competitors, and sample winning and losing conversations.
Read the baseline →
[Day 7 TBD — measurement in progress]
[Day 14 TBD — measurement in progress]
[Day 21 TBD — measurement in progress]
[Day 30 TBD — measurement in progress]
Every snapshot uses the same fingerprint so the deltas are comparable.
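To make the "same fingerprint, comparable deltas" idea concrete, here is a minimal sketch of how per-LLM win-rate changes between two snapshots could be computed. The function name, field names, and all numbers are illustrative assumptions, not MentionFox data or API:

```python
# Hypothetical sketch: comparing per-LLM win rates between two snapshots
# taken with the same fingerprint (same question set, same model mix).
# All names and numbers here are illustrative, not real MentionFox output.

def win_rate_deltas(baseline: dict[str, float], snapshot: dict[str, float]) -> dict[str, float]:
    """Per-LLM win-rate change vs the day-0 baseline, in percentage points."""
    return {llm: round(snapshot[llm] - baseline[llm], 1) for llm in baseline}

baseline = {"gemini_flash": 12.0, "gpt4omini": 9.5}   # day-0 win rates (%)
day_7    = {"gemini_flash": 15.5, "gpt4omini": 11.0}  # illustrative day-7 numbers

print(win_rate_deltas(baseline, day_7))  # {'gemini_flash': 3.5, 'gpt4omini': 1.5}
```

Holding the fingerprint fixed is what makes this subtraction meaningful: if the question set or model mix changed between snapshots, the deltas would mix measurement drift with real movement.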
It's the same engine MentionFox runs for itself, just unsupervised. Aggressive mode pushes the LLM allocation toward the cheaper, faster models (gemini_flash + gpt4omini do the bulk of conversations), keeps Claude switched off to control cost, and lets the question bank cycle without manual approval. Every winning conversation feeds the next prompt's context. Every losing one feeds the next round of GEO content. Thirty days, no human babysitter — that's the test.
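The Aggressive-mode setup described above can be sketched as a configuration object. This is an assumed shape, not the actual MentionFox config schema: the field names and the allocation weights are hypothetical, chosen only to mirror the description (cheap models carry the bulk of conversations, Claude off, no manual approval, wins and losses routed to different feedback queues):

```python
# Hypothetical sketch of an "Aggressive" autopilot configuration.
# Field names and weights are assumptions for illustration only.

from dataclasses import dataclass, field


@dataclass
class AutopilotConfig:
    mode: str = "aggressive"
    # Share of conversations routed to each LLM; weights sum to 1.0.
    # Illustrative split: cheap, fast models do the bulk of the work.
    llm_allocation: dict = field(default_factory=lambda: {
        "gemini_flash": 0.55,
        "gpt4omini": 0.45,
        "claude": 0.0,  # switched off to control cost
    })
    manual_approval: bool = False  # question bank cycles unattended
    feedback: dict = field(default_factory=lambda: {
        "winning": "next_prompt_context",   # wins feed the next prompt's context
        "losing": "geo_content_queue",      # losses feed the next round of GEO content
    })


cfg = AutopilotConfig()
assert abs(sum(cfg.llm_allocation.values()) - 1.0) < 1e-9
assert cfg.llm_allocation["claude"] == 0.0
```

The key design point the config captures is the feedback loop: the two queues mean every conversation, win or lose, changes what the engine does next, with no human gating either path.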