
Massive Multitask Language Understanding Professional (MMLU-Pro)

An enhanced version of MMLU with 10 answer choices instead of 4, featuring more reasoning-focused questions that better differentiate frontier models.

Top models on MMLU-Pro — April 10, 2026

As of April 10, 2026, Claude Opus 4.5 leads the MMLU-Pro leaderboard with 89.5%, followed by Qwen3.6 Plus (88.5%) and Qwen3.5 397B (87.8%).

21 models · Knowledge category · 22% of category score · Refreshing · Updated April 10, 2026

The top three models are clustered within 1.7 points, suggesting the benchmark is nearing saturation for frontier models.

21 models have been evaluated on MMLU-Pro. The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. Within that category, MMLU-Pro contributes 22% of the category score, so strong performance here directly affects a model's overall ranking.
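To see how this two-level weighting compounds, the effective share of MMLU-Pro in a model's overall score can be worked out directly. This is a small illustrative calculation, not BenchLM's actual scoring code:

```python
# MMLU-Pro contributes 22% of the Knowledge category score,
# and the Knowledge category carries 12% of the overall score.
category_weight = 0.12   # Knowledge category's share of the overall score
within_category = 0.22   # MMLU-Pro's share within the Knowledge category

effective_weight = category_weight * within_category
print(f"{effective_weight:.4f}")  # 0.0264, i.e. roughly 2.6% of the overall score
```

So a one-point gain on MMLU-Pro moves the overall score by only a few hundredths of a point, assuming the weights multiply as described.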

About MMLU-Pro

Year: 2024
Tasks: Multiple subjects
Format: 10-way multiple choice
Difficulty: Professional level

MMLU-Pro increases the number of choices from 4 to 10 and integrates more reasoning-focused problems, reducing the chance of correct guessing and better evaluating true understanding. It serves as a more robust discriminator of model capabilities.
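Widening the format from 4 to 10 choices lowers the random-guess floor from 25% to 10%, which is part of why the benchmark separates models better. The standard chance-corrected accuracy formula (a generic rescaling, not something BenchLM states it uses) makes this concrete:

```python
def chance_corrected(accuracy: float, n_choices: int) -> float:
    """Rescale accuracy so random guessing maps to 0 and perfect play to 1."""
    chance = 1.0 / n_choices
    return (accuracy - chance) / (1.0 - chance)

# Random-guess baselines: 25% on classic MMLU (4 choices) vs 10% on MMLU-Pro.
print(1 / 4, 1 / 10)

# The same raw 89.5% clears more of the guessing floor under 10 choices.
print(round(chance_corrected(0.895, 10), 3))  # ~0.883
print(round(chance_corrected(0.895, 4), 3))   # ~0.86
```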

BenchLM freshness & provenance

Version: MMLU-Pro
Refresh cadence: Static
Staleness state: Refreshing
Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.

Leaderboard (21 models)

1. Claude Opus 4.5: 89.5%
2. Qwen3.6 Plus: 88.5%
3. Qwen3.5 397B: 87.8%
4. 87.1%
5. 87.1%
6. 86.7%
7. 86.1%
8. 85.7%
9. 85.3%
10. 85.2%
11. 84.9%
12. 84.3%
13. 83.0%
14. 82.6%
15. 82.0%
16. 81.8%
17. 79.2%
18. 75.9%
19. 69.4%
20. 60.0%
21. 19.3%
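The saturation claim above can be reproduced from the score column alone (scores copied from this leaderboard; ranks 4 onward are shown without model names because the page does not pair them):

```python
# MMLU-Pro scores by rank, as listed on this page (percent).
scores = [89.5, 88.5, 87.8, 87.1, 87.1, 86.7, 86.1, 85.7, 85.3, 85.2,
          84.9, 84.3, 83.0, 82.6, 82.0, 81.8, 79.2, 75.9, 69.4, 60.0, 19.3]

print(round(scores[0] - scores[2], 1))   # 1.7 points between ranks 1 and 3
print(round(scores[0] - scores[9], 1))   # 4.3 points across the top ten
print(round(scores[0] - scores[-1], 1))  # 70.2 points across the full field
```

The narrow top-ten band against a 70-point full-field spread is what makes the benchmark a weak discriminator at the frontier but a strong one further down.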

FAQ

What does MMLU-Pro measure?

MMLU-Pro measures professional-level knowledge and multi-step reasoning across many subjects, using 10-way multiple choice (versus MMLU's 4) to better differentiate frontier models.

Which model scores highest on MMLU-Pro?

Claude Opus 4.5 by Anthropic currently leads with a score of 89.5% on MMLU-Pro.

How many models are evaluated on MMLU-Pro?

21 AI models have been evaluated on MMLU-Pro on BenchLM.

Last updated: April 10, 2026 · Benchmark version: MMLU-Pro

AI models change fast. We track them for you.

For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.

Free. No spam. Unsubscribe anytime.