MMAnswerBench

A multimodal mathematical reasoning benchmark that tests whether models can answer visually grounded math questions correctly.

Benchmark score on MMAnswerBench — April 10, 2026

BenchLM mirrors the published score view for MMAnswerBench. Claude Opus 4.5 leads the public snapshot at 84.0%, followed by GLM-5.1 (83.8%) and Qwen3.6 Plus (83.8%). BenchLM does not use these results to rank models overall.

6 models · Math · Current · Display only · Updated April 10, 2026

The published MMAnswerBench snapshot is tightly clustered at the top: Claude Opus 4.5 sits at 84.0%, while the third-place score is only 0.2 points behind. The spread across all six published scores is 3.1 points, so the results sit in a relatively narrow band.

6 models have been evaluated on MMAnswerBench. The benchmark falls in the Math category, which carries a 5% weight in BenchLM.ai's overall scoring system. MMAnswerBench is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
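The interaction between category weights and display-only exclusion can be sketched as follows. This is a hypothetical illustration, not BenchLM's actual implementation; the data structures, field names, and the second category's weight are assumptions.

```python
# Hypothetical sketch of category-weighted scoring with display-only
# benchmarks excluded (assumed structure, not BenchLM's real formula).

def overall_score(benchmarks, category_weights):
    """Average each category's included benchmarks, then apply weights.

    Display-only benchmarks (like MMAnswerBench) are skipped entirely,
    so their scores cannot move the overall ranking.
    """
    per_category = {}
    for b in benchmarks:
        if b["display_only"]:
            continue  # excluded from the scoring formula
        per_category.setdefault(b["category"], []).append(b["score"])
    total = 0.0
    for category, scores in per_category.items():
        total += category_weights[category] * (sum(scores) / len(scores))
    return total

benchmarks = [
    {"category": "Math", "score": 84.0, "display_only": True},   # MMAnswerBench
    {"category": "Math", "score": 90.0, "display_only": False},  # some scored math benchmark
    {"category": "Code", "score": 70.0, "display_only": False},  # some scored code benchmark
]
weights = {"Math": 0.05, "Code": 0.95}  # Math carries 5% per the page; Code weight is made up
print(overall_score(benchmarks, weights))  # 90*0.05 + 70*0.95 = 71.0
```

Note that changing the MMAnswerBench score in this sketch leaves the result unchanged, which is exactly what "display only" means for the overall ranking.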

About MMAnswerBench

Year

2026

Tasks

Multimodal math questions

Format

Visual and structured mathematical QA

Difficulty

Advanced mathematical reasoning

MMAnswerBench matters because text-only math ability does not guarantee strong performance when the relevant information is embedded in diagrams, tables, or other visual inputs. It acts as a multimodal math transfer check.

BenchLM freshness & provenance

Version

MMAnswerBench 2026

Refresh cadence

Quarterly

Staleness state

Current

Question availability

Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.

Benchmark score table (6 models)

Rank  Score
1     84.0%
2     83.8%
3     83.8%
4     82.5%
5     81.8%
6     80.9%

FAQ

What does MMAnswerBench measure?

MMAnswerBench is a multimodal mathematical reasoning benchmark that tests whether models can answer visually grounded math questions correctly.

Which model scores highest on MMAnswerBench?

Claude Opus 4.5 by Anthropic currently leads with a score of 84.0% on MMAnswerBench.

How many models are evaluated on MMAnswerBench?

6 AI models have been evaluated on MMAnswerBench on BenchLM.

Last updated: April 10, 2026 · BenchLM version MMAnswerBench 2026

AI models change fast. We track them for you.

For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.

Free. No spam. Unsubscribe anytime.