A video extension of MMMU-style multimodal reasoning over expert questions grounded in temporal media.
BenchLM mirrors the published score view for VideoMMMU. Gemini 3 Pro leads the public snapshot at 87.6%, followed by Kimi K2.5 (86.6%) and Qwen3.5 397B (84.7%). BenchLM does not use these results to rank models overall.
Gemini 3 Pro
Google
Kimi K2.5
Moonshot AI
Qwen3.5 397B
Alibaba
The published VideoMMMU snapshot is tightly clustered at the top: Gemini 3 Pro sits at 87.6%, while the third row is only 2.9 points behind. The spread across all five published scores is 3.6 points, so the results sit in a relatively narrow band.
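As a quick arithmetic check of the gap quoted above, using only the three scores shown on this page:

```python
# Worked check of the first-to-third gap, using the three published scores.
scores = {"Gemini 3 Pro": 87.6, "Kimi K2.5": 86.6, "Qwen3.5 397B": 84.7}
top = max(scores.values())
third = sorted(scores.values(), reverse=True)[2]  # third row of this excerpt
print(f"First-to-third gap: {top - third:.1f} points")  # -> 2.9

# The 3.6-point figure covers all five published scores; the remaining two
# scores are not shown in this excerpt, so that spread is not recomputed here.
```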
Five models have been evaluated on VideoMMMU. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. VideoMMMU is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
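As a rough illustration of how a category-weighted score with display-only exclusions might be computed, here is a minimal sketch. The data structure, field names, and non-multimodal weight are assumptions for illustration, not BenchLM's actual implementation:

```python
# Minimal sketch of category-weighted scoring with display-only benchmarks.
# All names and the "Other" weight are illustrative assumptions; only the 12%
# Multimodal & Grounded weight comes from the page above.
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    category: str
    score: float        # percentage, e.g. 87.6
    display_only: bool  # True -> shown for reference, excluded from scoring

CATEGORY_WEIGHTS = {"Multimodal & Grounded": 0.12, "Other": 0.88}

def overall_score(results: list[BenchmarkResult]) -> float:
    """Weighted mean over scored (non-display-only) benchmarks."""
    total, weight_sum = 0.0, 0.0
    for r in results:
        if r.display_only:
            continue  # e.g. VideoMMMU today: displayed but not scored
        w = CATEGORY_WEIGHTS.get(r.category, 0.0)
        total += w * r.score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0
```

Under this sketch, flipping a benchmark's display-only flag is the only change needed to move it between "reference" and "scored" status, which matches the behavior described above.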
Year
2026
Tasks
Video-grounded expert reasoning
Format
Video + text reasoning
Difficulty
Frontier multimodal video reasoning
VideoMMMU tests whether multimodal reasoning skills extend from static images into temporal video understanding. It is useful for evaluating long-form visual reasoning rather than static scene recognition.
Version
VideoMMMU 2026
Refresh cadence
Quarterly
Staleness state
Current
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
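The three treatment tiers named above could be derived from the freshness fields listed on this page. The decision rules below are a hypothetical sketch, not BenchLM's published policy; only the field values ("Current", display-only) come from the page:

```python
# Illustrative sketch: mapping freshness metadata to a treatment tier.
# The rules here are assumptions; see the BenchLM methodology page for
# the actual scoring policy.
def benchmark_treatment(staleness_state: str, display_only: bool) -> str:
    if display_only:
        return "display-only reference"  # VideoMMMU's current state
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"          # stale but still informative

print(benchmark_treatment("Current", display_only=True))
# -> display-only reference
```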
Gemini 3 Pro by Google currently leads with a score of 87.6% on VideoMMMU.
Five AI models have been evaluated on VideoMMMU at BenchLM.