
Massive Multi-discipline Multimodal Understanding (MMMU)

A broad multimodal reasoning benchmark spanning charts, diagrams, tables, and academic visual question answering.

Benchmark score on MMMU — April 10, 2026

BenchLM mirrors the published score view for MMMU. Qwen3.6 Plus leads the public snapshot at 86.0%, followed by Qwen3.5-122B-A10B (83.9%) and Qwen3.5-27B (82.3%). BenchLM does not use these results to rank models overall.

5 models · Multimodal & Grounded · Refreshing · Display only · Updated April 10, 2026

The published MMMU snapshot is tightly clustered at the top: Qwen3.6 Plus sits at 86.0%, while the third-place model is only 3.7 points behind. Across all five listed models the spread is 53.3 points, so the benchmark still separates strong models from weak ones even when the leaders cluster.
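
As a quick check on those two gaps, a minimal sketch (not BenchLM code): the score list is copied from the score table further down this page.

```python
# Illustrative only: recompute the gaps quoted above from the published scores.
scores = [86.0, 83.9, 82.3, 81.4, 32.7]  # ranks 1..5 from the score table below

top_to_third = scores[0] - scores[2]   # 86.0 - 82.3 = 3.7 points
full_spread = scores[0] - scores[-1]   # 86.0 - 32.7 = 53.3 points

print(f"Leader to third place: {top_to_third:.1f} points")
print(f"Spread across all listed models: {full_spread:.1f} points")
```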

Five models have been evaluated on MMMU. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. MMMU is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
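
To make the "display only" distinction concrete, here is a hypothetical sketch of how a display-only benchmark could be dropped from a weighted category score. The `scored` flag, the `OtherBench` entry, and the aggregation step are illustrative assumptions; only the 12% category weight comes from this page, and BenchLM's actual formula is described on its methodology page.

```python
# Hypothetical sketch, not BenchLM's actual scoring code.
benchmarks = [
    {"name": "MMMU", "category": "Multimodal & Grounded", "score": 86.0, "scored": False},
    {"name": "OtherBench", "category": "Multimodal & Grounded", "score": 70.0, "scored": True},  # made-up example
]
category_weight = 0.12  # Multimodal & Grounded weight quoted on this page

# Display-only benchmarks (scored=False) drop out before aggregation.
scored = [b for b in benchmarks if b["scored"]]
category_score = sum(b["score"] for b in scored) / len(scored)
weighted_contribution = category_weight * category_score

print(f"Category contribution to the overall score: {weighted_contribution:.2f}")
```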

About MMMU

Year: 2024
Tasks: Multimodal academic reasoning
Format: Image + text question answering
Difficulty: Frontier multimodal

MMMU is the base benchmark family behind later MMMU-Pro variants. It measures whether a model can answer expert-style questions that require combining visual understanding with domain knowledge and reasoning.

BenchLM freshness & provenance

Version: MMMU 2024
Refresh cadence: Annual
Staleness state: Refreshing
Question availability: Public benchmark set


BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
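
A rough sketch of that three-way treatment, assuming the decision depends on both the staleness state and whether the benchmark is included in the scoring formula. The bucket names come from this page; the decision logic itself is an assumption, not BenchLM's published policy.

```python
# Assumed decision logic, for illustration only.
def treatment(staleness_state: str, in_scoring_formula: bool) -> str:
    if not in_scoring_formula:
        return "display-only reference"   # e.g. MMMU on this page
    if staleness_state == "Fresh":
        return "strong differentiator"
    return "benchmark to watch"           # e.g. a refreshing benchmark that is still scored

print(treatment("Refreshing", in_scoring_formula=False))  # -> display-only reference
```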

Benchmark score table (5 models)

Rank  Model               Score
1     Qwen3.6 Plus        86.0%
2     Qwen3.5-122B-A10B   83.9%
3     Qwen3.5-27B         82.3%
4                         81.4%
5                         32.7%

FAQ

What does MMMU measure?

MMMU measures broad multimodal reasoning across charts, diagrams, tables, and academic visual question answering.

Which model scores highest on MMMU?

Qwen3.6 Plus by Alibaba currently leads with a score of 86.0% on MMMU.

How many models are evaluated on MMMU?

5 AI models have been evaluated on MMMU on BenchLM.

Last updated: April 10, 2026 · BenchLM version MMMU 2024
