
Benchmark Confidence & Contamination Flags

Not all benchmark scores are equally trustworthy. BenchLM now separates verified ranking from provisional ranking while still tracking the provenance of every stored score. The confidence indicator (1-4 dots) shows how much sourced benchmark coverage supports each model's score.

●●●● High: 7+ categories, 20+ non-generated benchmarks
●●●○ Good: 5+ categories, 12+ non-generated benchmarks
●●○○ Moderate: 3+ categories, 8+ non-generated benchmarks
●○○○ Low / Estimated: limited sourced data; score is estimated
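For concreteness, here is a minimal sketch of how these tiers map onto coverage counts. The thresholds are the ones listed above; the function name and its inputs (`categories`, `non_generated_benchmarks`) are illustrative, not BenchLM's actual code.

```python
def confidence_tier(categories: int, non_generated_benchmarks: int) -> str:
    """Map coverage counts to the 1-4 dot tiers described above.

    Illustrative sketch: the thresholds come from the published tier
    table; everything else here is an assumption.
    """
    if categories >= 7 and non_generated_benchmarks >= 20:
        return "High"             # ●●●●
    if categories >= 5 and non_generated_benchmarks >= 12:
        return "Good"             # ●●●○
    if categories >= 3 and non_generated_benchmarks >= 8:
        return "Moderate"         # ●●○○
    return "Low / Estimated"      # ●○○○
```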

Confidence Distribution (Ranked Models)

High: 7 models (6%)
Good: 4 models (4%)
Moderate: 9 models (8%)
Low / Estimated: 89 models (82%)

How BenchLM Scores Work

Verified, provisional, and generated

Each benchmark value is tagged as manual (a hand-entered public row) or generated (inferred from related models). Generated rows are excluded from all public ranking logic. Manual rows are further split into sourced rows, which feed the verified leaderboard, and source-unverified rows, which can still appear in provisional mode.
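To make the split concrete, here is a minimal sketch of the row filtering, assuming a simple row record with a `provenance` field ("manual" or "generated") and a `source_verified` flag; the names and schema are hypothetical, not BenchLM's actual data model.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkRow:
    model: str
    benchmark: str
    score: float
    provenance: str        # "manual" (hand-entered public row) or "generated" (inferred)
    source_verified: bool  # True once an exact citation is attached

def rankable_rows(rows: list[BenchmarkRow]) -> list[BenchmarkRow]:
    # Provisional leaderboard input: every non-generated row.
    return [r for r in rows if r.provenance == "manual"]

def sourced_rows(rows: list[BenchmarkRow]) -> list[BenchmarkRow]:
    # Verified leaderboard input: manual rows with an exact source attached.
    return [r for r in rows if r.provenance == "manual" and r.source_verified]
```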

Ranking Eligibility

A model must have at least 8 qualifying benchmarks across 2+ categories to rank in a lane. The provisional leaderboard uses rankable non-generated rows; the verified leaderboard uses sourced rows only. Models below the threshold are shown as tracked but unranked.
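A sketch of that eligibility gate under the stated thresholds, using plain dict rows with assumed keys; the real schema may differ.

```python
def is_rankable(rows: list[dict]) -> bool:
    # rows: one model's qualifying rows, e.g.
    # {"benchmark": "Terminal-Bench 2.0", "category": "agentic", "score": 54.0}
    distinct_benchmarks = {r["benchmark"] for r in rows}
    distinct_categories = {r["category"] for r in rows}
    return len(distinct_benchmarks) >= 8 and len(distinct_categories) >= 2
```

Running the same gate once over rankable (non-generated) rows and once over sourced rows yields the provisional and verified eligibility decisions, respectively.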

Category Eligibility

For category leaderboards, a model needs qualifying scores on at least half of the weighted benchmarks in that category. BenchLM computes this separately for provisional and verified ranking so sparse exact-source coverage cannot silently borrow strength from provisional rows.
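The half-coverage rule might look like the sketch below; the set-based arguments and the empty-category guard are assumptions for illustration.

```python
def category_eligible(model_benchmarks: set[str], weighted_benchmarks: set[str]) -> bool:
    # Eligible when the model has qualifying scores on at least half of the
    # category's weighted benchmarks. Computed twice: once over provisional
    # (non-generated) rows and once over sourced rows only.
    if not weighted_benchmarks:
        return False
    covered = model_benchmarks & weighted_benchmarks
    return len(covered) * 2 >= len(weighted_benchmarks)
```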

Display-Only Benchmarks

Some benchmarks (MMLU, BBH, HumanEval, older AIME/HMMT variants) are shown for context but don't affect scoring. These are either saturated (top models all score 97%+) or have been superseded by harder versions.

| Model | Developer | Confidence | Provisional score |
|---|---|---|---|
| Claude Opus 4.5 | Anthropic | ●●●● High | 80 |
| Kimi K2.5 | Moonshot AI | ●●●● High | 68 |
| Qwen3.6 Plus | Alibaba | ●●●● High | 77 |
| Qwen3.5 397B | Alibaba | ●●●● High | 66 |
| GLM-5 | Z.AI | ●●●● High | 77 |
| Claude Opus 4.6 | Anthropic | ●●●● High | 92 |
| GPT-5.4 | OpenAI | ●●●● High | 93 |
| Grok 4.20 | xAI | ●●●○ Good | 77 |
| Gemini 3.1 Pro | Google | ●●●○ Good | 94 |
| Claude Mythos Preview | Anthropic | ●●●○ Good | 99 |
| GLM-5.1 | Z.AI | ●●●○ Good | 84 |
| Qwen3.6-35B-A3B | Alibaba | ●●○○ Moderate | 64 |
| MiniMax M2.7 | MiniMax | ●●○○ Moderate | 65 |
| Claude Opus 4.7 | Anthropic | ●●○○ Moderate | 94 |
| Qwen3.5-27B | Alibaba | ●●○○ Moderate | 65 |
| Qwen3.5-35B-A3B | Alibaba | ●●○○ Moderate | 59 |
| Qwen3.5-122B-A10B | Alibaba | ●●○○ Moderate | 68 |
| GPT-5.4 Pro | OpenAI | ●●○○ Moderate | 92 |
| GPT-5.4 mini | OpenAI | ●●○○ Moderate | 73 |
| Claude Sonnet 4.6 | Anthropic | ●●○○ Moderate | 86 |
| GPT-5.2 | OpenAI | ●○○○ Low / Estimated | ~83 |
| Kimi K2.5 (Reasoning) | Moonshot AI | ●○○○ Low / Estimated | ~79 |
| GLM-4.7 | Z.AI | ●○○○ Low / Estimated | ~72 |
| Claude Sonnet 4.5 | Anthropic | ●○○○ Low / Estimated | ~68 |
| GPT-5.3 Codex | OpenAI | ●○○○ Low / Estimated | ~89 |
| Gemma 4 31B | Google | ●○○○ Low / Estimated | ~67 |
| o3-mini | OpenAI | ●○○○ Low / Estimated | ~58 |
| Gemini 3 Pro | Google | ●○○○ Low / Estimated | ~83 |
| GPT-4.1 | OpenAI | ●○○○ Low / Estimated | ~60 |
| Gemma 4 26B A4B | Google | ●○○○ Low / Estimated | ~58 |
| GPT-4.1 mini | OpenAI | ●○○○ Low / Estimated | ~47 |
| Qwen3 235B 2507 | Alibaba | ●○○○ Low / Estimated | ~35 |
| o1 | OpenAI | ●○○○ Low / Estimated | ~59 |
| GPT-4.1 nano | OpenAI | ●○○○ Low / Estimated | ~28 |
| Gemini 2.5 Pro | Google | ●○○○ Low / Estimated | ~67 |
| DeepSeek V3.2 | DeepSeek | ●○○○ Low / Estimated | ~60 |
| Gemini 3 Pro Deep Think | Google | ●○○○ Low / Estimated | ~87 |
| MiMo-V2-Flash | Xiaomi | ●○○○ Low / Estimated | ~63 |
| Claude Haiku 4.5 | Anthropic | ●○○○ Low / Estimated | ~60 |
| Claude 4.1 Opus | Anthropic | ●○○○ Low / Estimated | ~53 |
| Claude 4 Sonnet | Anthropic | ●○○○ Low / Estimated | ~52 |
| GLM-5 (Reasoning) | Z.AI | ●○○○ Low / Estimated | ~84 |
| Qwen3.5 397B (Reasoning) | Alibaba | ●○○○ Low / Estimated | ~81 |
| Grok 4.1 | xAI | ●○○○ Low / Estimated | ~80 |
| GPT-5.1 | OpenAI | ●○○○ Low / Estimated | ~80 |
| GPT-5 (high) | OpenAI | ●○○○ Low / Estimated | ~80 |
| GPT-5.2-Codex | OpenAI | ●○○○ Low / Estimated | ~80 |
| GPT-5.1-Codex-Max | OpenAI | ●○○○ Low / Estimated | ~79 |
| GPT-5 (medium) | OpenAI | ●○○○ Low / Estimated | ~74 |
| Grok 4.1 Fast | xAI | ●○○○ Low / Estimated | ~72 |
| o1-preview | OpenAI | ●○○○ Low / Estimated | ~68 |
| Gemini 3 Flash | Google | ●○○○ Low / Estimated | ~67 |
| Grok 4 | xAI | ●○○○ Low / Estimated | ~67 |
| DeepSeek V3.2 (Thinking) | DeepSeek | ●○○○ Low / Estimated | ~65 |
| o3 | OpenAI | ●○○○ Low / Estimated | ~60 |
| o3-pro | OpenAI | ●○○○ Low / Estimated | ~59 |
| DeepSeek LLM 2.0 | DeepSeek | ●○○○ Low / Estimated | ~54 |
| DeepSeek Coder 2.0 | DeepSeek | ●○○○ Low / Estimated | ~53 |
| Qwen2.5-1M | Alibaba | ●○○○ Low / Estimated | ~53 |
| Mistral Large 3 | Mistral | ●○○○ Low / Estimated | ~52 |
| Qwen2.5-72B | Alibaba | ●○○○ Low / Estimated | ~52 |
| DeepSeekMath V2 | DeepSeek | ●○○○ Low / Estimated | ~52 |
| Gemini 3.1 Flash-Lite | Google | ●○○○ Low / Estimated | ~51 |
| Qwen3 235B 2507 (Reasoning) | Alibaba | ●○○○ Low / Estimated | ~49 |
| Nemotron 3 Ultra 500B | NVIDIA | ●○○○ Low / Estimated | ~48 |
| Nemotron 3 Super 100B | NVIDIA | ●○○○ Low / Estimated | ~46 |
| o4-mini (high) | OpenAI | ●○○○ Low / Estimated | ~46 |
| GPT-4o mini | OpenAI | ●○○○ Low / Estimated | ~45 |
| Claude 4.1 Opus Thinking | Anthropic | ●○○○ Low / Estimated | ~45 |
| Kimi K2 | Moonshot AI | ●○○○ Low / Estimated | ~44 |
| Llama 3.1 405B | Meta | ●○○○ Low / Estimated | ~43 |
| Claude 3.5 Sonnet | Anthropic | ●○○○ Low / Estimated | ~42 |
| Grok Code Fast 1 | xAI | ●○○○ Low / Estimated | ~42 |
| GPT-4o | OpenAI | ●○○○ Low / Estimated | ~41 |
| Sarvam 105B | Sarvam | ●○○○ Low / Estimated | ~41 |
| Gemini 2.5 Flash | Google | ●○○○ Low / Estimated | ~40 |
| Mistral Large 2 | Mistral | ●○○○ Low / Estimated | ~40 |
| DeepSeek V3 | DeepSeek | ●○○○ Low / Estimated | ~38 |
| GPT-OSS 120B | OpenAI | ●○○○ Low / Estimated | ~38 |
| Gemini 1.5 Pro | Google | ●○○○ Low / Estimated | ~38 |
| Claude 3 Opus | Anthropic | ●○○○ Low / Estimated | ~37 |
| DeepSeek-R1 | DeepSeek | ●○○○ Low / Estimated | ~36 |
| Grok 3 [Beta] | xAI | ●○○○ Low / Estimated | ~34 |
| DeepSeek V3.1 (Reasoning) | DeepSeek | ●○○○ Low / Estimated | ~33 |
| DBRX Instruct | Databricks | ●○○○ Low / Estimated | ~33 |
| o1-pro | OpenAI | ●○○○ Low / Estimated | ~30 |
| GLM-4.5 | Z.AI | ●○○○ Low / Estimated | ~29 |
| Phi-4 | Microsoft | ●○○○ Low / Estimated | ~29 |
| DeepSeek V3.1 | DeepSeek | ●○○○ Low / Estimated | ~28 |
| Llama 3 70B | Meta | ●○○○ Low / Estimated | ~28 |
| Nemotron 3 Nano 30B | NVIDIA | ●○○○ Low / Estimated | ~27 |
| GPT-4 Turbo | OpenAI | ●○○○ Low / Estimated | ~27 |
| Z-1 | Z | ●○○○ Low / Estimated | ~25 |
| Mistral 8x7B | Mistral | ●○○○ Low / Estimated | ~25 |
| Gemini 1.0 Pro | Google | ●○○○ Low / Estimated | ~25 |
| Llama 4 Scout | Meta | ●○○○ Low / Estimated | ~24 |
| Nemotron-4 15B | NVIDIA | ●○○○ Low / Estimated | ~24 |
| Moonshot v1 | Moonshot AI | ●○○○ Low / Estimated | ~24 |
| Claude 3 Haiku | Anthropic | ●○○○ Low / Estimated | ~24 |
| Mixtral 8x22B Instruct v0.1 | Mistral | ●○○○ Low / Estimated | ~23 |
| Nemotron Ultra 253B | NVIDIA | ●○○○ Low / Estimated | ~23 |
| GLM-4.5-Air | Z.AI | ●○○○ Low / Estimated | ~22 |
| GPT-OSS 20B | OpenAI | ●○○○ Low / Estimated | ~20 |
| Gemma 3 27B | Google | ●○○○ Low / Estimated | ~18 |
| Llama 4 Maverick | Meta | ●○○○ Low / Estimated | ~18 |
| Llama 4 Behemoth | Meta | ●○○○ Low / Estimated | ~12 |
| Nova Pro | Amazon | ●○○○ Low / Estimated | ~11 |
| Mistral 7B v0.3 | Mistral | ●○○○ Low / Estimated | ~5 |
| Mistral 8x7B v0.2 | Mistral | ●○○○ Low / Estimated | ~2 |

Sourced = exact-source benchmark coverage. Rankable = non-generated benchmark coverage used by the provisional leaderboard. Generated = inferred from related models and excluded from ranking. Coverage = sourced share of the visible benchmark footprint.
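The coverage figure reduces to a simple ratio; this snippet only illustrates the definition above, with hypothetical argument names.

```python
def coverage(sourced_rows: int, visible_rows: int) -> float:
    # Sourced share of the visible benchmark footprint,
    # e.g. 12 sourced of 20 visible rows -> 0.6.
    return sourced_rows / visible_rows if visible_rows else 0.0
```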

Frequently Asked Questions

What is benchmark confidence on BenchLM?

Score confidence (1-4 dots) indicates how much sourced benchmark data supports a model's score. A 4-dot score is backed by 20+ sourced benchmark rows across 7+ categories. A 1-dot score relies on limited sourced coverage, and the provisional leaderboard may still include source-unverified non-generated rows. The confidence system helps you distinguish between well-tested models and those with sparse coverage.

What does "estimated" mean on BenchLM scores?

Scores marked with "Est." or "~" are derived from limited sourced data. Generated rows are excluded from ranking inputs, but the provisional leaderboard may still rely on source-unverified non-generated public rows until exact citations are attached. The verified leaderboard avoids that by using sourced rows only.

How does BenchLM detect contamination risk?

BenchLM tracks two key signals: (1) benchmark provenance — whether each score is a hand-entered public row ("manual") or was generated/inferred from related data, and (2) benchmark freshness — older benchmarks that haven't been updated are more likely to have been contaminated through training data inclusion. Models with mostly generated data or stale benchmarks receive lower confidence ratings. Exact-source verification is tracked separately from this manual-vs-generated split.
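As a rough illustration of combining those two signals, the sketch below flags a model when most of its rows are generated or most of its benchmarks are old. The 50% thresholds and the two-year staleness cutoff are invented for the example; BenchLM does not publish its exact cutoffs.

```python
from datetime import date

def contamination_flags(rows: list[dict], today: date) -> list[str]:
    # rows: e.g. {"provenance": "manual", "benchmark_release": date(2023, 9, 1)}
    flags = []
    if not rows:
        return flags
    generated_share = sum(r["provenance"] == "generated" for r in rows) / len(rows)
    if generated_share > 0.5:
        flags.append("mostly generated rows")
    stale_share = sum(
        (today - r["benchmark_release"]).days > 2 * 365 for r in rows
    ) / len(rows)
    if stale_share > 0.5:
        flags.append("mostly stale benchmarks")
    return flags
```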

What is benchmark provenance?

Provenance tracks the origin of each benchmark score. "Manual" scores are hand-entered public rows from BenchLM's dataset work. "Generated" scores were inferred from related models or interpolated. BenchLM now distinguishes provisional ranking, which can use non-generated manual rows, from verified ranking, which only uses exact-source-attached rows.

Which LLM benchmarks are most reliable?

Fresh, held-out benchmarks like SWE-Rebench (rolling window), Terminal-Bench 2.0, and HLE are the hardest to game. Older, saturated benchmarks like MMLU (where top models all score 97-99%) provide little signal. BenchLM weights newer, harder benchmarks more heavily and flags saturated ones as display-only.
