Tool-augmented MMMU-Pro variant that allows Python assistance during multimodal reasoning.
BenchLM mirrors the published score view for MMMU-Pro w/ Python. GPT-5.5 leads the public snapshot at 83.2%, followed by GPT-5.4 (82.1%) and Kimi K2.6 (80.1%). BenchLM does not use these results to rank models overall.
GPT-5.5
OpenAI
GPT-5.4
OpenAI
Kimi K2.6
Moonshot AI
The published MMMU-Pro w/ Python snapshot is tightly clustered at the top: GPT-5.5 sits at 83.2%, while the third-place model is only 3.1 points behind. The spread across the wider field is 13.7 points, so the benchmark still separates strong models even when the leaders cluster.
Five models have been evaluated on MMMU-Pro w/ Python. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. MMMU-Pro w/ Python is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
Year
2026
Tasks
Multimodal academic reasoning
Format
Image + text question answering with Python
Difficulty
Frontier multimodal
Useful for measuring multimodal reasoning when the model can combine visual understanding with computation.
Version
MMMU-Pro w/ Python 2026
Refresh cadence
Quarterly
Staleness state
Current
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
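To make the policy above concrete, here is a minimal sketch of how a category-weighted overall score can exclude display-only benchmarks such as MMMU-Pro w/ Python. The category weights, benchmark names, and second benchmark are illustrative assumptions, not BenchLM's actual formula.

```python
# Hypothetical sketch of category-weighted scoring with display-only
# benchmarks excluded. All names and numbers besides MMMU-Pro's published
# 83.2% and the 12% category weight are invented for illustration.

CATEGORY_WEIGHTS = {"multimodal_grounded": 0.12, "reasoning": 0.88}

benchmarks = [
    {"name": "MMMU-Pro w/ Python", "category": "multimodal_grounded",
     "score": 83.2, "display_only": True},   # shown for reference, not ranked
    {"name": "HypotheticalReasoningBench", "category": "reasoning",
     "score": 71.0, "display_only": False},
]

def overall_score(benchmarks):
    """Average scored benchmarks per category, then apply category weights,
    renormalizing over the weight actually used."""
    by_cat = {}
    for b in benchmarks:
        if b["display_only"]:
            continue  # excluded from the scoring formula
        by_cat.setdefault(b["category"], []).append(b["score"])
    total = weight_used = 0.0
    for cat, scores in by_cat.items():
        w = CATEGORY_WEIGHTS[cat]
        total += w * (sum(scores) / len(scores))
        weight_used += w
    return total / weight_used if weight_used else None

print(overall_score(benchmarks))  # MMMU-Pro is excluded, so only the
                                  # reasoning benchmark contributes: 71.0
```

Because the display-only entry is skipped before aggregation, adding or removing it never moves a model's overall rank, which matches the "reference but excluded" behavior described above.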
GPT-5.5 by OpenAI currently leads with a score of 83.2% on MMMU-Pro w/ Python.
5 AI models have been evaluated on MMMU-Pro w/ Python on BenchLM.