A long-document multimodal benchmark for grounded reasoning over extended document contexts.
BenchLM tracks MMLongBench-Doc in its local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the currently tracked rows below as a display-only reference table.
These tracked rows are useful for inspection and spot-checking, but until the exact-source attachments are complete they should not be treated as fully verified public benchmark rows.
BenchLM mirrors the published tracked score view for MMLongBench-Doc. Qwen3.6 Plus leads the public snapshot at 62.0%, followed by Qwen3.5 397B (61.5%) and Gemini 3 Pro (60.5%). BenchLM does not use these results to rank models overall.
Model | Developer | Model ID | Tracked score
Qwen3.6 Plus | Alibaba | qwen3-6-plus | 62.0%
Qwen3.5 397B | Alibaba | qwen3-5-397b | 61.5%
Gemini 3 Pro | Google | gemini-3-pro | 60.5%
The published MMLongBench-Doc snapshot is tightly clustered at the top: Qwen3.6 Plus sits at 62.0%, while the third row is only 1.5 points behind. The broader top-10 spread is 3.5 points, so many of the published scores sit in a relatively narrow band.
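For readers who want to reproduce the spread figure from the tracked rows above, a minimal sketch follows; it simply takes the three mirrored percentages shown on this page and is purely illustrative.

```python
# Spread over the three tracked scores quoted above (values in percent).
# Illustrative only; uses the mirrored snapshot values shown on this page.
tracked = {
    "Qwen3.6 Plus": 62.0,
    "Qwen3.5 397B": 61.5,
    "Gemini 3 Pro": 60.5,
}
spread = max(tracked.values()) - min(tracked.values())
print(f"Spread across tracked rows: {spread:.1f} points")  # -> 1.5 points
```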
Four models have been evaluated on MMLongBench-Doc. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. MMLongBench-Doc itself is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
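To make the relationship between the 12% category weight and the display-only exclusion concrete, here is a minimal sketch of how such a scheme could be wired together. The function names, data shapes, and the second benchmark are assumptions for illustration, not BenchLM's actual scoring code.

```python
# Hypothetical sketch of category-weighted scoring with a display-only exclusion.
# Names, data shapes, and the second benchmark are illustrative assumptions.
from __future__ import annotations

CATEGORY_WEIGHT = {"Multimodal & Grounded": 0.12}  # 12% weight, as stated above

BENCHMARKS = [
    # (benchmark name, category, counted in the scoring formula?)
    ("MMLongBench-Doc", "Multimodal & Grounded", False),     # display-only
    ("HypotheticalDocBench", "Multimodal & Grounded", True),  # assumed scored benchmark
]

def category_score(model_scores: dict[str, float], category: str) -> float | None:
    """Average a model's scores over benchmarks that are in the category AND
    counted in scoring; display-only rows contribute nothing."""
    counted = [
        name for name, cat, scored in BENCHMARKS
        if cat == category and scored and name in model_scores
    ]
    if not counted:
        return None
    return sum(model_scores[name] for name in counted) / len(counted)

def weighted_contribution(model_scores: dict[str, float], category: str) -> float:
    """Contribution of one category to an overall score (0 if nothing counted)."""
    score = category_score(model_scores, category)
    return CATEGORY_WEIGHT[category] * score if score is not None else 0.0
```

Under this assumed scheme, a model's MMLongBench-Doc row never enters `category_score`, so the 12% category weight is carried entirely by whichever benchmarks in the category are actually scored.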
Year: 2026
Tasks: Long document understanding
Format: Document-grounded reasoning
Difficulty: Long-context document reasoning
MMLongBench-Doc is designed to test whether a model can maintain grounded understanding across large document contexts rather than only short OCR-style snippets.
Version: MMLongBench-Doc 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
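As a rough illustration of the kind of rule such freshness metadata could feed, here is a small sketch that maps the fields shown above to the three tiers named in this note. The field names, types, and decision thresholds are assumptions, not the published methodology.

```python
# Illustrative mapping from freshness metadata to a display tier.
# Field names and the decision rule are assumptions, not BenchLM's methodology.
from dataclasses import dataclass

@dataclass
class FreshnessMeta:
    staleness_state: str      # e.g. "Current"
    refresh_cadence: str      # e.g. "Quarterly"
    questions_public: bool    # True for a public benchmark set
    source_verified: bool     # exact-source attachments completed?

def display_tier(meta: FreshnessMeta) -> str:
    """Pick one of the three tiers named in the methodology note above."""
    if not meta.source_verified or meta.staleness_state != "Current":
        return "display-only reference"
    if meta.questions_public:
        # Public question sets are easier to target, so treat them cautiously.
        return "benchmark to watch"
    return "strong differentiator"

# The metadata on this page: current and public, but awaiting exact-source checks.
print(display_tier(FreshnessMeta("Current", "Quarterly", True, source_verified=False)))
# -> "display-only reference"
```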
Qwen3.6 Plus currently leads the published MMLongBench-Doc snapshot with a tracked score of 62.0%. BenchLM shows this benchmark for display only and does not use it in overall rankings.
Four AI models are included in BenchLM's mirrored MMLongBench-Doc snapshot, based on the public leaderboard captured on April 10, 2026.