A broad multimodal reasoning benchmark spanning charts, diagrams, tables, and academic visual question answering.
BenchLM mirrors the published score view for MMMU. Qwen3.6 Plus leads the public snapshot at 86.0%, followed by Qwen3.5-122B-A10B (83.9%) and Qwen3.5-27B (82.3%). BenchLM does not use these results to rank models overall.
Qwen3.6 Plus
Alibaba
Qwen3.5-122B-A10B
Alibaba
Qwen3.5-27B
Alibaba
The published MMMU snapshot is tightly clustered at the top: Qwen3.6 Plus sits at 86.0%, while the third-place model trails by only 3.7 points. The spread across all evaluated models is 53.3 points, so the benchmark still separates strong models even when the leaders cluster.
5 models have been evaluated on MMMU. The benchmark falls in BenchLM.ai's Multimodal & Grounded category, which carries a 12% weight in the overall scoring system. MMMU itself is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
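The weight-and-exclusion logic above can be sketched as a small function. This is an illustrative assumption, not BenchLM's actual code: the data, the second category, and all names here are hypothetical, and only the 12% Multimodal & Grounded weight and MMMU's display-only status come from the text.

```python
# Hypothetical sketch of weighted overall scoring in which display-only
# benchmarks (like MMMU here) are excluded from the formula.
# All names, weights, and scores below are illustrative assumptions.

CATEGORY_WEIGHTS = {
    "multimodal_grounded": 0.12,  # stated 12% weight for this category
    "reasoning": 0.88,            # assumed remainder, for illustration only
}

benchmarks = [
    {"name": "MMMU", "category": "multimodal_grounded", "score": 86.0, "scored": False},
    {"name": "OtherBench", "category": "reasoning", "score": 74.0, "scored": True},
]

def overall_score(benchmarks, weights):
    """Weighted mean over scored benchmarks only; display-only ones are skipped."""
    total, weight_sum = 0.0, 0.0
    for b in benchmarks:
        if not b["scored"]:
            continue  # MMMU is display-only, so it never enters the sum
        w = weights[b["category"]]
        total += w * b["score"]
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

print(overall_score(benchmarks, CATEGORY_WEIGHTS))
```

Because MMMU is flagged `scored=False`, its 86.0% never enters the weighted mean, which is the sense in which it "does not directly affect overall rankings."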
Year
2024
Tasks
Multimodal academic reasoning
Format
Image + text question answering
Difficulty
Frontier multimodal
MMMU is the base benchmark family behind later MMMU-Pro variants. It measures whether a model can answer expert-style questions that require combining visual understanding with domain knowledge and reasoning.
Version
MMMU 2024
Refresh cadence
Annual
Staleness state
Refreshing
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
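The three-tier treatment described above can be sketched as a simple decision function. This is a hypothetical reading of the policy, not BenchLM's implementation; the function name, inputs, and branch order are assumptions, and only the three tier labels and MMMU's current state come from the text.

```python
# Illustrative sketch (assumed, not BenchLM's actual policy) of mapping
# freshness metadata to one of the three treatment tiers named in the text.

def benchmark_treatment(staleness_state: str, in_scoring_formula: bool) -> str:
    """Return how a benchmark is treated in scoring, per the three tiers."""
    if not in_scoring_formula:
        return "display-only reference"   # MMMU's current state
    if staleness_state == "Refreshing":
        return "strong differentiator"    # fresh enough to weight fully (assumed rule)
    return "benchmark to watch"           # stale but still tracked (assumed rule)

print(benchmark_treatment("Refreshing", False))
```

Under this sketch, MMMU (state "Refreshing" but excluded from the formula) lands in the display-only tier, matching how the page presents it.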
Qwen3.6 Plus by Alibaba currently leads with a score of 86.0% on MMMU.
5 AI models have been evaluated on MMMU on BenchLM.