A display-only MMLU-Pro reference from Arcee AI's Trinity-Large-Thinking launch chart.
BenchLM mirrors the published score view for MMLU-Pro (Arcee). Claude Opus 4.6 leads the public snapshot at 89.1%, followed by Kimi K2.5 (87.1%) and GLM-5 (85.8%). BenchLM does not use these results to rank models overall.
1. Claude Opus 4.6 (Anthropic): 89.1%
2. Kimi K2.5 (Moonshot AI): 87.1%
3. GLM-5 (Z.AI): 85.8%
The published MMLU-Pro (Arcee) snapshot is tightly clustered at the top: Claude Opus 4.6 sits at 89.1%, while the third row, GLM-5 at 85.8%, is only 3.3 points behind. The spread across all five published scores is 8.3 points, so the snapshot sits in a relatively narrow band.
Five models have been evaluated on MMLU-Pro (Arcee). The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. MMLU-Pro (Arcee) itself, however, is currently displayed for reference only and excluded from the scoring formula, so it does not directly affect overall rankings.
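To make that weighting concrete, here is a minimal Python sketch of how a category-weighted overall score could skip display-only rows. The field names, sample values, and aggregation are illustrative assumptions, not BenchLM's actual implementation; only the 12% Knowledge weight comes from the page.

```python
# Sketch only: assumed data shape, not BenchLM's code.
CATEGORY_WEIGHTS = {"Knowledge": 0.12}  # Knowledge carries 12% per the page

benchmarks = [
    # Standardized row (score here is illustrative) vs. the display-only Arcee row.
    {"name": "MMLU-Pro", "category": "Knowledge", "score": 88.0, "display_only": False},
    {"name": "MMLU-Pro (Arcee)", "category": "Knowledge", "score": 89.1, "display_only": True},
]

def category_score(rows, category):
    """Average only the scorable (non-display-only) benchmarks in one category."""
    scorable = [r["score"] for r in rows
                if r["category"] == category and not r["display_only"]]
    return sum(scorable) / len(scorable) if scorable else None

knowledge = category_score(benchmarks, "Knowledge")
print(knowledge)                                   # 88.0 -- the Arcee row is skipped
print(knowledge * CATEGORY_WEIGHTS["Knowledge"])   # its weighted contribution to the overall score
```

Under this sketch, the Arcee snapshot is carried in the data but contributes nothing to the weighted total, matching the display-only policy described above.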
Year: 2026
Tasks: Professional academic QA
Format: 10-way multiple choice
Difficulty: Professional level
BenchLM stores this chart-specific MMLU-Pro row separately so it does not overwrite the standardized weighted MMLU-Pro benchmark values.
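As a sketch of that separation, the snippet below keys stored scores by a (benchmark, variant) pair, so a chart-specific row can never clobber the standardized one. The schema and the standardized value are assumptions for illustration, not BenchLM's storage layer.

```python
# Sketch only: assumed (benchmark, variant) keying, not BenchLM's schema.
scores: dict[tuple[str, str], float] = {}

def store(benchmark: str, variant: str, value: float) -> None:
    # Distinct variants get distinct keys, so a launch-chart snapshot
    # cannot overwrite the standardized, weighted row.
    scores[(benchmark, variant)] = value

store("MMLU-Pro", "standard", 88.0)     # standardized row (illustrative value)
store("MMLU-Pro", "arcee-2026", 89.1)   # display-only launch-chart row
assert scores[("MMLU-Pro", "standard")] == 88.0  # untouched by the Arcee write
```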
Version: MMLU-Pro (Arcee) 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
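A rough illustration of such a freshness policy follows, using the three tiers named above. The thresholds, function, and date values are assumptions for the sketch, not the published BenchLM methodology.

```python
# Sketch only: assumed thresholds, not BenchLM's published policy.
from datetime import date

def scoring_tier(last_refresh: date, cadence_days: int, display_only: bool,
                 today: date) -> str:
    if display_only:
        return "display-only reference"   # policy flag overrides freshness
    age = (today - last_refresh).days
    if age <= cadence_days:               # refreshed within its cadence window
        return "strong differentiator"
    if age <= 2 * cadence_days:           # one missed refresh: keep watching
        return "benchmark to watch"
    return "display-only reference"       # stale enough to stop scoring on

# MMLU-Pro (Arcee) is quarterly (~90 days) and marked Current, but it is
# display-only by policy, so the flag short-circuits the freshness check.
print(scoring_tier(date(2026, 1, 15), 90, True, date(2026, 3, 1)))
```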
Claude Opus 4.6 by Anthropic currently leads with a score of 89.1% on MMLU-Pro (Arcee).
Five AI models have been evaluated on MMLU-Pro (Arcee) on BenchLM.