How We Verify Publications

Methodology for the MentionFox Publication Vetting Report. Updated 2026-05-10.

The MentionFox Publication Vetting Report is a research synthesis built from public-record sources. It is NOT a regulatory verification of record, NOT an editorial-quality judgment, NOT a fact-check of any specific story, and NOT a substitute for direct ownership-disclosure access through the publication's parent-company filings. Read this page in full before relying on the report for a hiring, sourcing, or investment decision.

Publication-level diligence sits at the intersection of company diligence and editorial diligence. The report combines Crunchbase company-level data with editorial-track-record signals (masthead, RSS, byline distribution, awards) and traffic / audience analytics where public. The result is a single citation-rich document that compresses the public record on a publication into a structured form.

Who reads these reports

Five use cases drive the report's framing:

Subject resolution

Publication resolution begins with the publication's primary domain. Domain-match is the canonical key.

  1. Primary domain. When provided directly (e.g. nytimes.com, platformer.news), we resolve the canonical Crunchbase company record, Wikipedia article (when available), and masthead URL.
  2. Publication name. When the domain is not provided, we run the publication name against Crunchbase and disambiguate via parent company, founding date, primary beat, and editor-in-chief.
  3. Disambiguation fallback. When the candidate set is large (e.g. multiple "Tribune" newspapers across cities), additional disambiguators surface (city, parent company, founding era). We never auto-pick from a multi-candidate result.

Data sources — what we use

| Source | What it tells us | Class |
| --- | --- | --- |
| Crunchbase | Company record: founding date, employees, funding rounds where applicable, current investors, parent company. | Authoritative-Secondary |
| Masthead and about pages | Editorial leadership, executive editor, contributing editors, editorial-staff composition. | Authoritative-Secondary |
| RSS feeds | Post cadence, beat consistency, byline distribution, multi-author versus single-author footprint. | Authoritative-Secondary |
| Wikipedia | Founding history, ownership-change record, controversies of record, awards. | Aggregator |
| Similarweb / Alexa traffic | Estimated monthly unique visitors, top-traffic countries, traffic-trend signal. Tagged as estimates. | Aggregator |
| News mentions across other publications | How peer publications describe this publication: awards mentions, controversy mentions, citations of work. | Authoritative-Secondary |
| SEC EDGAR | For publicly traded media parents (NYT Co, News Corp, Gannett): 10-K and 10-Q disclosures of subsidiary publications, segment reporting, audience figures. | Federal-Primary |
| IRS 990 filings | For nonprofit publications (ProPublica, Texas Tribune, Mother Jones): 990 disclosures of revenue, executive compensation, foundation grantors. | Federal-Primary |
| Public corrections archive | Where the publication maintains a public corrections page: the corrections cadence and severity over time. | Authoritative-Secondary |
| Substack publications | Post cadence, paid-tier presence, subscriber-count signal when public, editorial-team composition. | Authoritative-Secondary |
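
As an example of how an RSS feed yields the byline-distribution signal listed above, the sketch below counts posts per `dc:creator` element and classifies the footprint. The sample feed and the classification threshold are illustrative assumptions, not MentionFox's code.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical three-item feed for demonstration purposes only.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <item><dc:creator>Ana Reyes</dc:creator></item>
    <item><dc:creator>Ana Reyes</dc:creator></item>
    <item><dc:creator>Sam Okafor</dc:creator></item>
  </channel>
</rss>"""

DC_CREATOR = "{http://purl.org/dc/elements/1.1/}creator"

def byline_distribution(rss_xml: str) -> Counter:
    """Count posts per byline from <dc:creator> elements."""
    root = ET.fromstring(rss_xml)
    return Counter(el.text for el in root.iter(DC_CREATOR))

def footprint(dist: Counter) -> str:
    """Classify a multi-author versus single-author footprint."""
    return "multi-author" if len(dist) > 1 else "single-author"
```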

What's NOT used (and why)

Source class hierarchy (ICD 206)

Each cited source falls into one of three classes, weighted differently when the synthesis evaluates evidence strength:

  1. Federal-Primary — directly authored by a US federal agency. SEC EDGAR for publicly traded parents, IRS 990 filings for nonprofits.
  2. Authoritative-Secondary — masthead and about pages, RSS feeds, Crunchbase, news coverage in peer publications, public corrections archives, Substack publication metadata.
  3. Aggregator — Similarweb / Alexa traffic estimates, Wikipedia, public review aggregators. Treated as estimates and triangulation signals, never as adjudicated truth.
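
One way to picture the class weighting is the sketch below. The numeric weights are illustrative assumptions for the example only; the report does not publish its actual weighting.

```python
# Hypothetical weights: Federal-Primary counts most, Aggregator least.
CLASS_WEIGHTS = {
    "Federal-Primary": 1.0,          # federal filings (SEC EDGAR, IRS 990)
    "Authoritative-Secondary": 0.6,  # mastheads, RSS, Crunchbase, peer coverage
    "Aggregator": 0.3,               # estimates and triangulation signals only
}

def evidence_strength(citation_classes: list[str]) -> float:
    """Sum class weights across the source classes backing one claim."""
    return sum(CLASS_WEIGHTS[c] for c in citation_classes)
```

Under this scheme a claim backed by one federal filing outweighs a claim backed by three aggregator estimates, which matches the "never as adjudicated truth" treatment of the Aggregator class.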

Confidence ratings (ICD 203)

Where a section asserts a probabilistic claim (e.g. about audience growth, editorial-staff stability, or parent-company strategy), it uses the ICD 203 probability vocabulary (almost no chance / very unlikely / unlikely / roughly even chance / likely / very likely / almost certain). Bands are chosen based on data density. When evidence is thin, the band defaults to "roughly even chance" with an explicit "[insufficient public evidence]" tag.
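
Mapping a numeric probability onto that vocabulary can be sketched as below. The percentage bands follow the published ICD 203 table; the function itself is an illustrative assumption, not MentionFox's code.

```python
# Upper bound of each ICD 203 band, in ascending order.
ICD_203_BANDS = [
    (0.05, "almost no chance"),
    (0.20, "very unlikely"),
    (0.45, "unlikely"),
    (0.55, "roughly even chance"),
    (0.80, "likely"),
    (0.95, "very likely"),
    (1.00, "almost certain"),
]

def band(p: float) -> str:
    """Return the ICD 203 term whose range covers probability p."""
    for upper, term in ICD_203_BANDS:
        if p <= upper:
            return term
    raise ValueError("probability must be in [0, 1]")
```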

Defamation guardrails

Publication verification carries non-trivial defamation risk: a false claim that lowers a publication's editorial standing can be defamatory. The synthesis follows a strict cite-don't-characterize policy:

Section-by-section methodology

1. Executive Summary

Generated last. Pulls verdict-relevant facts from each prior section: parent company, primary beat, audience size band, editorial leadership, awards, and any public-reputation flags.

2. Publication Profile

Founding date, current ownership, primary editorial focus, current editor-in-chief, headquarters. Sourced from Crunchbase, masthead, and Wikipedia.

3. Editorial Leadership

Masthead position, executive-editor history, contributing-editor list, editorial-board composition. Sourced from masthead pages and Wayback Machine snapshots.

4. Beat Coverage and Editorial Identity

Topic clustering across recent posts. Editorial-identity consistency over time. Cross-beat work patterns. Sourced from RSS feeds and direct site analysis.

5. Audience and Reach

Estimated monthly traffic from Similarweb / Alexa. Subscriber-count signal where publicly disclosed. Top-traffic countries. Tagged as estimates.

6. Ownership and Corporate Structure

Parent-company structure. For publicly traded media, segment reporting from 10-K. For nonprofit media, 990 filings and major foundation funders. For VC-backed media, Crunchbase funding rounds.

7. Editorial Track Record

Awards (Pulitzer, Polk, Loeb, ONA, IRE, Murrow), publication-level recognition, citations of work in other publications.

8. Corrections and Ethics Signal

Public corrections archive cadence. Public retraction record where available. Quoted verbatim from the publication's own pages.

9. Comparable Publications

Five archetype-matched peers from a curated reference. Beat overlap, audience size, ownership structure, founding era.
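
A minimal sketch of archetype matching, assuming beat overlap is scored by Jaccard similarity and the categorical attributes by exact match: the field names, weights, and similarity measure are all hypothetical choices for illustration, not the curated reference's actual logic.

```python
def peer_score(a: dict, b: dict) -> float:
    """Score peer similarity: beat overlap plus categorical matches."""
    beats_a, beats_b = set(a["beats"]), set(b["beats"])
    jaccard = len(beats_a & beats_b) / len(beats_a | beats_b)
    # Booleans sum as 0/1; 0.5 is an arbitrary illustrative weight.
    matches = (a["audience_band"] == b["audience_band"]) \
            + (a["ownership"] == b["ownership"]) \
            + (a["founding_era"] == b["founding_era"])
    return jaccard + 0.5 * matches

def top_peers(subject: dict, reference: list[dict], k: int = 5) -> list[dict]:
    """Rank the curated reference set and keep the top-k archetype matches."""
    return sorted(reference, key=lambda p: peer_score(subject, p),
                  reverse=True)[:k]
```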

10. Public Reputation and Press

News coverage of the publication itself across peer publications. Severity-ranked findings or an honest "no public reputation concerns" when the public record is clean.

11. Editorial-Staff Stability

Departures and hires across the prior 24 months. Masthead changes via Wayback Machine snapshots. Beat-team continuity.

12. References and Source Citations

Aggregated audit trail of every URL cited across the prior 11 sections, deduplicated and grouped by source class.
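
The assembly of that audit trail can be sketched as a deduplicate-then-group pass over every citation gathered from sections 1-11. The function name and the `(url, source_class)` tuple shape are hypothetical, chosen for the example.

```python
def build_references(citations: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Deduplicate cited URLs and group them by source class,
    preserving first-seen order within each class."""
    grouped: dict[str, list[str]] = {}
    seen: set[str] = set()
    for url, source_class in citations:
        if url in seen:          # drop repeat citations of the same URL
            continue
        seen.add(url)
        grouped.setdefault(source_class, []).append(url)
    return grouped
```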

Limitations + what this report is NOT

Verifiability

Every claim in a Publication Vetting Report cites a public URL the reader can verify. Claims that cannot be cited do not appear; they are replaced with the [insufficient public evidence as of {date}] tag. The reports are auditable: a brand-safety reviewer or media-investment committee can re-run the verification chain by hand from the citations alone.

Related verifications

Run a Publication Vetting Report yourself.
Order a Publication Snapshot →