This reporting page is intentionally narrow: it ranks models using only currently tracked, sourced factuality benchmarks such as SimpleQA, HLE without tools, and multimodal factuality. It is a reporting page, not a mature weighted category.
Bottom line: Factuality benchmarks are intentionally narrow — SimpleQA and HLE-no-tools are the primary signals. Claude Mythos Preview leads, but this category is still maturing.
According to BenchLM.ai, Claude Mythos Preview leads this ranking with a score of 56.8, followed by Gemini 3.1 Pro (45.4) and Muse Spark (42.8). There is a significant gap between the leading models and the rest of the field.
The best open-weight option is Gemma 4 31B (ranked #10 with a score of 19.5). Proprietary models lead this family by a wide margin, so open-weight models currently fit teams that need full model control and can accept a substantial accuracy gap.
This ranking averages provisional, sourced factuality benchmark scores tracked by BenchLM.ai. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
Claude Mythos Preview
Anthropic · 1M
Best factuality score. Leads SimpleQA and HLE-no-tools.
Gemini 3.1 Pro
Google · 1M
Strong factual accuracy without chain-of-thought overhead.
Muse Spark
Meta · 262K
Claude Mythos Preview leads factuality with the best SimpleQA and HLE-no-tools scores.
Gemini 3.1 Pro shows strong factuality for a non-reasoning model.
GPT-5.4 posts solid SimpleQA performance, especially on knowledge-heavy queries.
The top model on this sourced reporting-family slice is Claude Mythos Preview by Anthropic with an average of 56.8.
The best open-weight model is Gemma 4 31B at position #10.
11 models are listed with sourced benchmark coverage in this reporting family.
This is a reporting family ranking, not a weighted category. It averages sourced factuality benchmarks to give a focused view of this capability.
Models must have sourced results on at least a quarter of the benchmarks in this family to be included. Coverage varies — a model with 2 benchmark scores is less reliable than one with 5.
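As a rough illustration of how the inclusion rule and the family average fit together, here is a minimal Python sketch. The benchmark list, function name, and example scores are hypothetical assumptions for illustration; BenchLM.ai's actual pipeline is not published here.

# Hypothetical sketch of the reporting-family average and the
# quarter-coverage inclusion rule. Benchmark names and scores are
# illustrative assumptions, not BenchLM.ai's actual data or code.
FAMILY_BENCHMARKS = ["SimpleQA", "HLE-no-tools", "multimodal-factuality"]

def family_average(sourced_scores, min_coverage=0.25):
    """Return the model's average over its sourced family benchmarks,
    or None if it covers less than min_coverage of the family."""
    scores = {b: s for b, s in sourced_scores.items() if b in FAMILY_BENCHMARKS}
    if len(scores) / len(FAMILY_BENCHMARKS) < min_coverage:
        return None  # excluded from the ranking
    return sum(scores.values()) / len(scores)

# A model sourced on 2 of 3 benchmarks clears the quarter threshold,
# but its average rests on thinner coverage than a fully sourced model.
print(family_average({"SimpleQA": 60.0, "HLE-no-tools": 50.0}))  # 55.0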