A broad multimodal reasoning benchmark spanning charts, diagrams, tables, and academic visual question answering.
As of March 2026, Gemini 3 Pro leads the MMMU leaderboard with 87.2%, followed by GPT-5.2 (86.7%) and Qwen3.6 Plus (86.0%).
1. Gemini 3 Pro (Google): 87.2%
2. GPT-5.2 (OpenAI): 86.7%
3. Qwen3.6 Plus (Alibaba): 86.0%
The top models are clustered within 1.2 points, suggesting this benchmark is nearing saturation for frontier models.
Six models have been evaluated on MMMU. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. MMMU itself is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
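The page does not publish the scoring formula itself, but a minimal sketch of how category-weighted scoring with display-only exclusions could work is shown below. Only the 12% Multimodal & Grounded weight comes from this page; every other name, number, and structural choice is an illustrative assumption.

```python
# Hypothetical sketch of category-weighted overall scoring with
# display-only exclusions. Only the 12% "Multimodal & Grounded"
# weight is taken from the page; all else is assumed.

CATEGORY_WEIGHTS = {
    "Multimodal & Grounded": 0.12,  # weight stated on the page
    # ...remaining categories would carry the other 88% of the weight
}

def overall_score(results, scored_benchmarks):
    """Average a model's scores within each category, then combine
    categories by weight, renormalizing over the weight actually used.
    Display-only benchmarks (like MMMU here) are simply absent from
    `scored_benchmarks`, so they contribute nothing."""
    total = 0.0
    weight_used = 0.0
    for category, weight in CATEGORY_WEIGHTS.items():
        scores = [s for (bench, cat), s in results.items()
                  if cat == category and bench in scored_benchmarks]
        if scores:
            total += weight * (sum(scores) / len(scores))
            weight_used += weight
    return total / weight_used if weight_used else None

# Illustrative numbers only: MMMU's score is shown but not counted.
results = {
    ("MMMU", "Multimodal & Grounded"): 87.2,      # displayed, not scored
    ("MMMU-Pro", "Multimodal & Grounded"): 70.0,  # made-up placeholder
}
print(overall_score(results, {"MMMU-Pro"}))  # ~70.0
```

Renormalizing over the weight actually used keeps scores comparable when a model has no scored benchmarks in some categories.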
Year: 2024
Tasks: Multimodal academic reasoning
Format: Image + text question answering
Difficulty: Frontier multimodal
MMMU is the base benchmark family behind later MMMU-Pro variants. It measures whether a model can answer expert-style questions that require combining visual understanding with domain knowledge and reasoning.
Paper: MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI
Version: MMMU 2024
Refresh cadence: Annual
Staleness state: Refreshing
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
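As a rough illustration of that triage, the sketch below maps freshness metadata to the three treatment tiers. The tier names come from the paragraph above; the field names, cutoff, and decision order are assumptions, not BenchLM's documented policy.

```python
# Rough sketch of freshness-based benchmark triage. Tier names come
# from the page text; fields, threshold, and ordering are assumed.

def benchmark_treatment(staleness_state: str, top_score_spread: float) -> str:
    """Decide how a benchmark is treated in overall scoring."""
    if top_score_spread < 1.5:      # assumed saturation cutoff, in points
        return "display-only reference"
    if staleness_state == "Stale":  # assumed staleness label
        return "benchmark to watch"
    return "strong differentiator"

# MMMU refreshes annually, but its top models sit within 1.2 points,
# which matches its current display-only status on this page.
print(benchmark_treatment("Refreshing", 1.2))  # display-only reference
```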