
MMMU-Pro with Python (MMMU-Pro w/ Python)

Tool-augmented MMMU-Pro variant that allows Python assistance during multimodal reasoning.

Benchmark score on MMMU-Pro w/ Python — May 1, 2026

BenchLM mirrors the published score view for MMMU-Pro w/ Python. GPT-5.5 leads the public snapshot at 83.2%, followed by GPT-5.4 (82.1%) and Kimi K2.6 (80.1%). BenchLM does not use these results to rank models overall.

5 models · Multimodal & Grounded · Current · Display only · Updated May 1, 2026

The published MMMU-Pro w/ Python snapshot is tightly clustered at the top: GPT-5.5 sits at 83.2%, and the third-ranked model is only 3.1 points behind. The full spread across the five listed models is 13.7 points, so the benchmark still separates strong models even when the leaders cluster.
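The clustering figures are simple differences over the published scores. A minimal sketch, using the five scores from the table on this page:

```python
# Published MMMU-Pro w/ Python scores, top to bottom, from this page's table.
scores = [83.2, 82.1, 80.1, 78.0, 69.5]

# Gap between the leader and the third-ranked model.
gap_to_third = round(scores[0] - scores[2], 1)

# Full spread across all five listed models.
spread = round(scores[0] - scores[-1], 1)

print(gap_to_third, spread)  # 3.1 13.7
```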

5 models have been evaluated on MMMU-Pro w/ Python. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. MMMU-Pro w/ Python itself is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
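One plausible reading of the "display only" rule is that excluded benchmarks simply never enter the weighted average. The sketch below illustrates that policy under stated assumptions; it is not BenchLM's published formula. The 12% category weight comes from the text above, while the helper name, the placeholder scores, and the normalization by included weight are all assumptions.

```python
def overall_score(results):
    """Weighted average over benchmark results, skipping display-only entries.

    `results` is a list of dicts with keys: score (0-100), weight (category
    weight, e.g. 0.12), display_only (bool). Hypothetical sketch only.
    """
    included = [r for r in results if not r["display_only"]]
    total_weight = sum(r["weight"] for r in included)
    if total_weight == 0:
        return None  # nothing scorable
    return sum(r["score"] * r["weight"] for r in included) / total_weight

# MMMU-Pro w/ Python is display-only, so it drops out of the average;
# the other two entries are made-up placeholders.
results = [
    {"score": 83.2, "weight": 0.12, "display_only": True},   # MMMU-Pro w/ Python
    {"score": 70.0, "weight": 0.12, "display_only": False},
    {"score": 90.0, "weight": 0.25, "display_only": False},
]
print(overall_score(results))
```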

About MMMU-Pro w/ Python

Year: 2026

Tasks: Multimodal academic reasoning

Format: Image + text question answering with Python

Difficulty: Frontier multimodal

Useful for measuring multimodal reasoning when the model can combine visual understanding with computation.

BenchLM freshness & provenance

Version: MMMU-Pro w/ Python 2026

Refresh cadence: Quarterly

Staleness state: Current

Question availability: Public benchmark set


BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
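The triage above can be pictured as a small mapping from freshness metadata to a treatment tier. The tier names come from the sentence above; the decision logic itself is an assumption, not BenchLM's published policy:

```python
def benchmark_tier(staleness_state, display_only):
    """Map freshness metadata to a treatment tier.

    Hypothetical logic: display-only benchmarks are always reference-only;
    otherwise the staleness state decides between differentiator and watch list.
    """
    if display_only:
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# MMMU-Pro w/ Python: Current, but flagged display-only.
print(benchmark_tier("Current", display_only=True))  # display-only reference
```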

Benchmark score table (5 models)

1. GPT-5.5: 83.2%
2. GPT-5.4: 82.1%
3. Kimi K2.6: 80.1%
4. 78%
5. 69.5%

FAQ

What does MMMU-Pro w/ Python measure?

MMMU-Pro w/ Python is a tool-augmented variant of MMMU-Pro that measures multimodal academic reasoning when the model is allowed Python assistance during its reasoning.

Which model scores highest on MMMU-Pro w/ Python?

GPT-5.5 by OpenAI currently leads with a score of 83.2% on MMMU-Pro w/ Python.

How many models are evaluated on MMMU-Pro w/ Python?

5 AI models have been evaluated on MMMU-Pro w/ Python on BenchLM.

Last updated: May 1, 2026 · BenchLM version MMMU-Pro w/ Python 2026
