Massive Multi-discipline Multimodal Understanding (MMMU)

A broad multimodal reasoning benchmark spanning charts, diagrams, tables, and academic visual question answering.

Top Models on MMMU — March 2026

As of March 2026, Gemini 3 Pro leads the MMMU leaderboard with 87.2%, followed by GPT-5.2 (86.7%) and Qwen3.6 Plus (86.0%).


According to BenchLM.ai, Gemini 3 Pro leads the MMMU benchmark with a score of 87.2%, followed by GPT-5.2 (86.7%) and Qwen3.6 Plus (86.0%). The top models are clustered within 1.2 points, suggesting this benchmark is nearing saturation for frontier models.

6 models have been evaluated on MMMU. The benchmark falls in the Multimodal & Grounded category. This category carries a 12% weight in BenchLM.ai's overall scoring system. MMMU is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
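The interaction between category weights and the display-only flag can be sketched as follows. This is an illustrative reconstruction, not BenchLM's actual implementation: the function name, data shapes, and equal-weight averaging within a category are assumptions; only the 12% category weight and the rule that display-only benchmarks are excluded from scoring come from the page.

```python
# Hypothetical sketch of a BenchLM-style weighted overall score.
# Assumption: benchmarks within a category are averaged equally,
# then categories are combined by weight; display-only benchmarks
# (like MMMU here) are skipped entirely.
def overall_score(results, category_weights, display_only):
    """results: {category: {benchmark: score}};
    category_weights: {category: weight, e.g. 0.12};
    display_only: set of benchmark names excluded from scoring."""
    total, weight_sum = 0.0, 0.0
    for category, weight in category_weights.items():
        scores = [s for bench, s in results.get(category, {}).items()
                  if bench not in display_only]
        if scores:  # a category with only display-only benchmarks contributes nothing
            total += weight * (sum(scores) / len(scores))
            weight_sum += weight
    # Renormalize over the categories that actually contributed.
    return total / weight_sum if weight_sum else 0.0
```

Under this sketch, adding or removing MMMU from the display-only set changes the Multimodal & Grounded category average but never the 12% weight itself.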

About MMMU

Year: 2024
Tasks: Multimodal academic reasoning
Format: Image + text question answering
Difficulty: Frontier multimodal

MMMU is the base benchmark family behind later MMMU-Pro variants. It measures whether a model can answer expert-style questions that require combining visual understanding with domain knowledge and reasoning.

MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI

BenchLM freshness & provenance

Version: MMMU 2024
Refresh cadence: Annual
Staleness state: Refreshing
Question availability: Public benchmark set


BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
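That three-tier decision could be sketched like this. The tier names and the "Refreshing"/"Display only" states follow the page, but the exact mapping rules are an assumption; BenchLM's published methodology may differ.

```python
# Illustrative mapping from freshness metadata to benchmark treatment.
# Assumption: an explicit display-only flag wins, then staleness state
# decides the tier. This is a sketch of the policy the page describes,
# not BenchLM's actual rules.
def benchmark_treatment(staleness_state: str, display_only: bool) -> str:
    if display_only:
        return "display-only reference"  # shown for context, excluded from scoring
    if staleness_state == "Fresh":
        return "strong differentiator"
    if staleness_state == "Refreshing":
        return "benchmark to watch"
    return "display-only reference"  # stale benchmarks fall back to reference status
```

For MMMU as shown here (state "Refreshing", flagged display-only), the display-only flag dominates and the benchmark stays out of the scoring formula.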

Leaderboard (6 models)

#1 Gemini 3 Pro: 87.2%
#2 GPT-5.2: 86.7%
#3 Qwen3.6 Plus: 86.0%
#4 Qwen3.5 397B: 85.0%
#5 Kimi K2.5: 84.3%
#6 Claude Opus 4.5: 80.7%

FAQ

What does MMMU measure?

MMMU is a broad multimodal reasoning benchmark spanning charts, diagrams, tables, and academic visual question answering. It tests whether a model can combine visual understanding with expert-level domain knowledge.

Which model scores highest on MMMU?

Gemini 3 Pro by Google currently leads with a score of 87.2% on MMMU.

How many models are evaluated on MMMU?

BenchLM has evaluated 6 AI models on MMMU.

Last updated: April 2, 2026 · BenchLM version MMMU 2024
