MMLU-Pro first-party comparison snapshot (MMLU-Pro (Arcee))

A display-only MMLU-Pro reference from Arcee AI's Trinity-Large-Thinking launch chart.

Benchmark score on MMLU-Pro (Arcee) — April 16, 2026

BenchLM mirrors the published score view for MMLU-Pro (Arcee). Claude Opus 4.6 leads the public snapshot at 89.1%, followed by Kimi K2.5 (87.1%) and GLM-5 (85.8%). BenchLM does not use these results to rank models overall.

5 models · Knowledge · Current · Display only · Updated April 16, 2026

The published MMLU-Pro (Arcee) snapshot is tightly clustered at the top: Claude Opus 4.6 sits at 89.1%, while the third row is only 3.3 points behind. The full spread across the five listed models is 8.3 points, so the published scores sit in a relatively narrow band.

Five models have been evaluated on MMLU-Pro (Arcee). The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. MMLU-Pro (Arcee) is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
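The display-only rule above can be sketched in a few lines. This is a hypothetical illustration, not BenchLM's actual implementation: the function name, benchmark records, and the 0.84 standardized MMLU-Pro score are made-up placeholders; only the 12% Knowledge weight and the display-only exclusion come from the page.

```python
# Hypothetical sketch of excluding a display-only benchmark from a
# weighted category score. Values other than the 12% Knowledge weight
# are illustrative assumptions.

CATEGORY_WEIGHTS = {"Knowledge": 0.12}  # Knowledge carries a 12% weight per the page

benchmarks = [
    {"name": "MMLU-Pro",         "category": "Knowledge", "display_only": False, "score": 0.84},
    {"name": "MMLU-Pro (Arcee)", "category": "Knowledge", "display_only": True,  "score": 0.891},
]

def knowledge_contribution(benches):
    """Average the scoring-eligible Knowledge benchmarks, then apply the category weight."""
    eligible = [b["score"] for b in benches
                if b["category"] == "Knowledge" and not b["display_only"]]
    if not eligible:
        return 0.0
    return CATEGORY_WEIGHTS["Knowledge"] * sum(eligible) / len(eligible)

# MMLU-Pro (Arcee) is skipped; only the standardized MMLU-Pro row counts.
print(round(knowledge_contribution(benchmarks), 4))  # → 0.1008
```

Under this sketch, adding or removing the Arcee row leaves the overall score untouched, which matches the page's claim that the snapshot does not affect rankings.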

About MMLU-Pro (Arcee)

Year

2026

Tasks

Professional academic QA

Format

10-way multiple choice

Difficulty

Professional level

BenchLM stores this chart-specific MMLU-Pro row separately so it does not overwrite the standardized weighted MMLU-Pro benchmark values.

BenchLM freshness & provenance

Version

MMLU-Pro (Arcee) 2026

Refresh cadence

Quarterly

Staleness state

Current

Question availability

Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
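The three-way treatment described above can be expressed as a small decision rule. The tier names come from the page; the rule itself is an assumed sketch, not BenchLM's documented policy.

```python
# Hypothetical mapping from freshness metadata to a treatment tier.
# Tier labels are taken from the page; the decision logic is an assumption.

def treatment(staleness: str, display_only: bool) -> str:
    if display_only:
        return "display-only reference"
    if staleness == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# MMLU-Pro (Arcee) is marked Current but also Display only, so the
# display-only flag wins.
print(treatment("Current", display_only=True))  # → display-only reference
```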

Benchmark score table (5 models)

1 · Claude Opus 4.6 · 89.1%
2 · Kimi K2.5 · 87.1%
3 · GLM-5 · 85.8%
4 · 83.4%
5 · 80.8%

FAQ

What does MMLU-Pro (Arcee) measure?

A display-only MMLU-Pro reference from Arcee AI's Trinity-Large-Thinking launch chart.

Which model scores highest on MMLU-Pro (Arcee)?

Claude Opus 4.6 by Anthropic currently leads with a score of 89.1% on MMLU-Pro (Arcee).

How many models are evaluated on MMLU-Pro (Arcee)?

5 AI models have been evaluated on MMLU-Pro (Arcee) on BenchLM.

Last updated: April 16, 2026 · BenchLM version MMLU-Pro (Arcee) 2026
