MLVU mean average (M-Avg)

A multi-task video understanding benchmark averaged across MLVU categories.

How BenchLM shows MLVU (M-Avg) right now

BenchLM is tracking MLVU (M-Avg) in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.

6 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only

Tracked score on MLVU (M-Avg) — April 10, 2026

BenchLM mirrors the published tracked score view for MLVU (M-Avg). Qwen3.6 Plus and Qwen3.5 397B share the top published score at 86.7%, followed by GPT-5.2 at 85.6%. BenchLM does not use these results to rank models overall.

6 models · Multimodal & Grounded · Current · Display only · Updated April 10, 2026

The published MLVU (M-Avg) snapshot is tightly clustered at the top: Qwen3.6 Plus and Qwen3.5 397B are tied at 86.7%, and the third row is only 1.1 points behind. The full six-model spread is 5.0 points, so all of the published scores sit in a relatively narrow band.

6 models have been evaluated on MLVU (M-Avg). The benchmark falls in the Multimodal & Grounded category. This category carries a 12% weight in BenchLM.ai's overall scoring system. MLVU (M-Avg) is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
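The weighting and exclusion rule described above can be sketched in code. This is a hypothetical illustration, not BenchLM's actual implementation: every name and score other than the 12% category weight stated on this page is an assumption.

```python
# Hypothetical sketch of category-weighted overall scoring that excludes
# display-only benchmarks such as MLVU (M-Avg). Names and the second
# benchmark's score are illustrative assumptions.

CATEGORY_WEIGHTS = {"Multimodal & Grounded": 0.12}  # 12% weight from this page

benchmarks = [
    # (name, category, score, display_only)
    ("MLVU (M-Avg)", "Multimodal & Grounded", 86.7, True),    # display only: excluded
    ("SomeOtherBench", "Multimodal & Grounded", 72.0, False),  # hypothetical scored row
]

def category_score(category: str) -> float:
    """Average only the scored (non-display-only) benchmarks in a category."""
    scored = [score for _, cat, score, display_only in benchmarks
              if cat == category and not display_only]
    return sum(scored) / len(scored) if scored else 0.0

def weighted_contribution(category: str) -> float:
    """Scale the category average by its weight in the overall formula."""
    return CATEGORY_WEIGHTS[category] * category_score(category)
```

Under this sketch, changing the MLVU (M-Avg) row has no effect on `weighted_contribution`, which is what "displayed for reference but excluded from the scoring formula" implies.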

About MLVU (M-Avg)

Year

2026

Tasks

General video understanding

Format

Video QA and understanding

Difficulty

Broad multimodal video reasoning

MLVU captures general-purpose video understanding rather than a single narrow skill. BenchLM tracks the mean-average summary row so scores can be compared directly across provider comparison tables.

BenchLM freshness & provenance

Version

MLVU (M-Avg) 2026

Refresh cadence

Quarterly

Staleness state

Current

Question availability

Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.

Tracked score table (6 models)

1. Qwen3.6 Plus (qwen3-6-plus): 86.7%
2. Qwen3.5 397B (qwen3-5-397b): 86.7%
3. GPT-5.2 (gpt-5-2): 85.6%
4. Kimi K2.5 (kimi-k2-5): 85.0%
5. Gemini 3 Pro (gemini-3-pro): 83.0%
6. Claude Opus 4.5 (claude-opus-4-5): 81.7%
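The clustering figures quoted in the snapshot summary (the 1.1-point gap to third place and the 5.0-point overall spread) can be recomputed from the tracked rows above. A minimal sketch:

```python
# Tracked rows from the table above, restated as data so the summary
# statistics in the text can be recomputed.
rows = [
    ("Qwen3.6 Plus", 86.7),
    ("Qwen3.5 397B", 86.7),
    ("GPT-5.2", 85.6),
    ("Kimi K2.5", 85.0),
    ("Gemini 3 Pro", 83.0),
    ("Claude Opus 4.5", 81.7),
]

scores = [score for _, score in rows]
top = max(scores)
gap_to_third = round(top - scores[2], 1)  # distance from the lead to rank 3
spread = round(top - min(scores), 1)      # full spread across all six models
```

Running this reproduces the numbers in the text: a 1.1-point gap to the third row and a 5.0-point spread across the six tracked models.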

FAQ

What does MLVU (M-Avg) measure?

A multi-task video understanding benchmark averaged across MLVU categories.

Which model leads the published MLVU (M-Avg) snapshot?

Qwen3.6 Plus currently leads the published MLVU (M-Avg) snapshot with a tracked score of 86.7%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on MLVU (M-Avg)?

6 AI models are included in BenchLM's mirrored MLVU (M-Avg) snapshot, based on the public leaderboard captured on April 10, 2026.

Last updated: April 10, 2026 · mirrored from the public benchmark leaderboard

AI models change fast. We track them for you.

For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.

Free. No spam. Unsubscribe anytime.