MentionFox

Your 0-100 Score, In Plain English

The number on your dashboard is not a vibe. It is a weighted average of how often the four scoring engines name your brand as the recommended answer to buyer-shaped queries. The sections below cover what each band means, real sample scores from live brands, what actually moves the number, and how long change takes.

The bands

Your score is a weighted win rate, scaled 0-100. Win rate means the percentage of buyer-shaped queries where your brand was named as the recommended answer, or placed in the top three with a positive justification. The four scoring engines, with their weights, are listed on the methodology page.
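For concreteness, here is a minimal sketch of how a single engine answer could be counted as a win under that definition. The EngineResult structure and field names are illustrative, not the actual MentionFox data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EngineResult:
    """One engine's answer to one buyer-shaped query (illustrative fields)."""
    recommended: bool             # brand named as the recommended answer
    rank: Optional[int]           # position if the brand appears in a list, else None
    positive_justification: bool  # engine gave a positive reason for including the brand

def is_win(r: EngineResult) -> bool:
    """Win = recommended outright, or top three with a positive justification."""
    return r.recommended or (r.rank is not None and r.rank <= 3 and r.positive_justification)

def win_rate(results: list[EngineResult]) -> float:
    """Share of buyer-shaped queries counted as wins, on a 0-100 scale."""
    return 100.0 * sum(is_win(r) for r in results) / len(results) if results else 0.0
```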

Score | Band | What it means
0-15 | Invisible | AI engines do not know you exist for buyer queries. Ground zero.
16-35 | Mentioned | You show up sometimes, mostly in long lists. Rarely the recommended pick.
36-55 | Considered | You are in the conversation. You win head-to-heads against weaker competitors but lose to category leaders.
56-75 | Recommended | You are a default answer in your niche. Engines name you proactively.
76-100 | Dominant | You are the answer. Other tools are alternatives to you.

Sample scores from live brands

The numbers below are pulled from the live promoter database, computed across the four scoring engines, normalized 0-100. They are real and they update.

MentionFox 63

Considered → Recommended. Live data, all conversations to date.

Engine | Conversations | Wins | Win rate
Gemini Flash | 2,164 | 1,275 | 58.9%
GPT-4o-mini | 2,109 | 1,511 | 71.6%
DeepSeek | 515 | 147 | 28.5%
Mistral | 193 | 55 | 28.5%

Weighted score: (58.9 × 0.30) + (71.6 × 0.30) + (28.5 × 0.25) + (28.5 × 0.15) = 17.7 + 21.5 + 7.1 + 4.3 = 50.6 raw, then normalized through a non-linear curve that rewards getting above 50% on the two highest-weight engines. Final published number: 63.
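Here is a sketch of that arithmetic, using the per-engine counts from the table above. The weights are taken from the worked example; the final non-linear normalization curve is not published here, so this stops at the raw 50.6.

```python
# Per-engine results from the MentionFox table: engine -> (conversations, wins).
results = {
    "Gemini Flash": (2_164, 1_275),
    "GPT-4o-mini": (2_109, 1_511),
    "DeepSeek":    (515, 147),
    "Mistral":     (193, 55),
}

# Engine weights as used in the worked example above; they sum to 1.0.
weights = {"Gemini Flash": 0.30, "GPT-4o-mini": 0.30, "DeepSeek": 0.25, "Mistral": 0.15}

def raw_weighted_score(results: dict, weights: dict) -> float:
    """Weighted average of per-engine win rates, on a 0-100 scale (before the curve)."""
    return sum(
        (100.0 * wins / conversations) * weights[engine]
        for engine, (conversations, wins) in results.items()
    )

print(f"raw weighted score: {raw_weighted_score(results, weights):.1f}")  # ~50.6
# The published 63 applies a further non-linear curve that rewards crossing 50%
# on the two highest-weight engines; that curve is not reproduced here.
```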

Reading: MentionFox is now in the recommended zone for social listening and lead-finding queries on the engines that drive most buyer traffic (Gemini, GPT-4o-mini). Smaller engines (DeepSeek, Mistral) lag because the brand is newer and has less long-tail content out there for those models to ground on. The closing strategy is generated content for the long tail, not more conversations on the strong engines.

Anthropic n/a

Honest gap: tracked as a co-profile reference, not yet run through the promoter loop.

Anthropic is a brand we co-profile (their Claude API powers parts of our stack) but we have not run them through the seven-LLM panel as their own subject. Doing so would be performative — they do not need GEOFixer to be findable. We list them here so you can see the difference between a brand that uses GEOFixer (MentionFox, RiteKit) and a brand that is referenced but not subject-tested.

RiteKit (day-zero baseline) 5

Live baseline measured 2026-05-09. Day 0 of a 30-day case study.

Engine (baseline panel) | Conversations | Cited | Cite rate
ChatGPT | 11 | 2 | 18.2%
Gemini | 11 | 0 | 0%
Claude (panel only) | 11 | 0 | 0%
Perplexity | 11 | 0 | 0%

Total: 44 conversations, 2 cites, 4.55% raw rate. Top competitors named in those conversations: Buffer (21 cites), Hootsuite (20), Sprout Social (19), Later (15), SocialBee (9). The 30-day plan and live measurements are documented in the RiteKit case study.
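The raw baseline rate is a straightforward pooling of the panel counts. A small sketch using the day-zero numbers from the table above:

```python
# RiteKit day-zero counts from the baseline table: engine -> (conversations, cites).
baseline = {
    "ChatGPT": (11, 2),
    "Gemini": (11, 0),
    "Claude (panel only)": (11, 0),
    "Perplexity": (11, 0),
}

conversations = sum(c for c, _ in baseline.values())
cites = sum(n for _, n in baseline.values())
print(f"{cites}/{conversations} = {100.0 * cites / conversations:.2f}% raw cite rate")
# -> 2/44 = 4.55% raw cite rate, the day-zero baseline in the case study
```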

Reading the gap. RiteKit at 5 and MentionFox at 63 are not different brands at different difficulty levels; both are software brands with mature products. The gap comes from years of accumulated content and conversation training behind MentionFox versus a day-zero baseline run on RiteKit. The gap is closeable. The case study is the live receipt.

Why your score is what it is

Your score is mostly explained by three variables, in this order:

  1. Category density. If you are in a crowded category (CRM, project management, social listening), the recommended slot has more competition. Same brand quality, lower score, simply because more competitors are fighting for the top recommendation.
  2. Content depth on the long tail. Engines ground their recommendations on what has been written about you. A brand with 50 deep articles ranks higher than a brand with 5 thin articles, even if the products are identical. This is the lever Autopilot pulls hardest.
  3. Recency of training-time signal. If your brand is newer than the model's last major training cut, you live or die on retrieval-time grounding. That is when shadow site serving and structured content matter most.

What rarely explains a low score: pricing, your design system, your logo, or how good your product actually is. Engines do not see those. Engines see text and links.

What moves the score, ranked by impact

Lever | Typical lift | Time to land
Generated long-form content for losing query categories. Autopilot finds the queries where you lose, drafts content, you publish (or we publish to the shadow site). | +8 to +20 | 4-8 weeks
Structured comparison pages (you vs your top three competitors). Engines retrieve these heavily for "X vs Y" queries. | +5 to +12 | 2-4 weeks
Active conversation training on the engines where you lose. Multi-turn conversations that surface your differentiators in context. | +3 to +8 | 1-3 weeks
Shadow site serving for AI crawlers (clean structure, schema markup, no JS chrome). Levels the playing field for crawler-readability. | +2 to +6 | 1-2 weeks
Earning a citation in a high-authority source (your category's top listicle, a Wikipedia footnote, a respected blog round-up). | +3 to +10 per citation | 4-12 weeks (PR work)
Renaming your brand to be unambiguous, fixing brand-name collisions, owning your slug. (Counts only if you actually have a collision.) | +1 to +15 | Varies

Compounding matters. A single content piece moves you a little. Twenty pieces over twelve weeks compound. Trackers will not give you twenty pieces. Autopilot will.
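A toy model of that compounding, purely illustrative: the 20-point ceiling matches the top of the content lever's lift range above, but the per-piece capture rate is an assumption, not a measured MentionFox parameter.

```python
def cumulative_lift(pieces: int, ceiling: float = 20.0, capture: float = 0.15) -> float:
    """Toy model: each new piece captures a fixed fraction of the remaining headroom
    toward the lever's ceiling. Illustrative only; real lift depends on category
    density, query coverage, and how well each piece grounds the engines."""
    lift = 0.0
    for _ in range(pieces):
        lift += (ceiling - lift) * capture
    return lift

print(f"1 piece:   +{cumulative_lift(1):.1f} points")   # ~ +3.0
print(f"5 pieces:  +{cumulative_lift(5):.1f} points")   # ~ +11.1
print(f"20 pieces: +{cumulative_lift(20):.1f} points")  # ~ +19.2
```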

How long it takes — with confidence intervals

The honest answer is "longer than you want, faster than SEO." For per-lever ranges, see the time-to-land column in the table above.

Why we will not promise a specific lift

Anyone who promises "+30 points in 60 days, guaranteed" is either lying or planning to game the score. The category, the starting depth of content, and how aggressively you publish dominate the result. We promise the methodology, the measurement cadence, and the work. The number is the number.

See your live score

Five-day free trial. The seven-LLM panel runs against your domain on day one. The score and the engine-by-engine breakdown go live on your dashboard.

Get my baseline score