We-Math

A multimodal math benchmark for visually grounded mathematical reasoning and answer generation.

How BenchLM shows We-Math right now

BenchLM is tracking We-Math in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.
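As a concrete way to picture the spot-checking workflow, the distinction between a tracked row and a verified row can be sketched as follows. This is a minimal illustration; the schema and field names (model_id, score_pct, source_attachment) are hypothetical, not BenchLM's actual data model.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TrackedRow:
        """One tracked benchmark row (hypothetical schema, for illustration)."""
        model_id: str
        score_pct: float
        source_attachment: Optional[str] = None  # exact-source record, once attached

        @property
        def verified(self) -> bool:
            # A row counts as verified only after its exact-source record is attached.
            return self.source_attachment is not None

    rows = [TrackedRow("qwen3-6-plus", 89.0), TrackedRow("qwen3-5-397b", 87.9)]
    # Until every attachment lands, the whole table stays display-only.
    display_only = not all(row.verified for row in rows)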

6 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only

Tracked score on We-Math — April 10, 2026

BenchLM mirrors the published tracked score view for We-Math. Qwen3.6 Plus leads the public snapshot at 89.0%, followed by Qwen3.5 397B (87.9%) and Gemini 3 Pro (86.9%). BenchLM does not use these results to rank models overall.

6 models · Multimodal & Grounded · Current · Display only · Updated April 10, 2026

The published We-Math snapshot is tightly clustered at the top: Qwen3.6 Plus sits at 89.0%, while the third row is only 2.1 points behind. The full six-model spread is 19.0 points (89.0% down to 70.0%), so the benchmark still separates strong models even when the leaders cluster.
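Those gap and spread figures follow directly from the tracked scores in the table below; this is a quick arithmetic check, not BenchLM tooling:

    # Tracked We-Math scores, best to worst (from the table below).
    scores = [89.0, 87.9, 86.9, 84.7, 79.0, 70.0]
    gap_to_third = round(scores[0] - scores[2], 1)   # 2.1 points between rank 1 and rank 3
    full_spread = round(scores[0] - scores[-1], 1)   # 19.0 points across all six rows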

6 models have been evaluated on We-Math. The benchmark falls in the Multimodal & Grounded category. This category carries a 12% weight in BenchLM.ai's overall scoring system. We-Math is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
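One way to picture how a display-only benchmark coexists with a weighted category is the sketch below. The 12% weight comes from the paragraph above; everything else (the averaging rule, the display_only flag, the names) is an assumption, not BenchLM's published formula.

    # Hypothetical category-weighted scoring; only the 12% weight is from this page.
    CATEGORY_WEIGHTS = {"Multimodal & Grounded": 0.12}

    benchmarks = [
        {"name": "We-Math", "category": "Multimodal & Grounded",
         "top_score": 89.0, "display_only": True},
        # ...other benchmarks in this category would be listed here...
    ]

    def category_score(category: str) -> float:
        """Average the category's scored benchmarks, skipping display-only ones."""
        scored = [b["top_score"] for b in benchmarks
                  if b["category"] == category and not b["display_only"]]
        return sum(scored) / len(scored) if scored else 0.0

    # We-Math is flagged display-only, so it adds nothing to the overall score
    # even though its category carries a 12% weight.
    contribution = CATEGORY_WEIGHTS["Multimodal & Grounded"] * category_score("Multimodal & Grounded")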

About We-Math

Year: 2026
Tasks: Visually grounded math problems
Format: Multimodal mathematical reasoning
Difficulty: Advanced multimodal mathematics

We-Math is useful as a visual-math stress test because it combines symbolic reasoning with figure understanding. It helps reveal whether a model's math strength transfers into multimodal settings.

BenchLM freshness & provenance

Version: We-Math 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
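Read as a decision rule, that policy might look roughly like the sketch below. Only the "Current" state and the display-only treatment appear on this page; the other state names and the override logic are illustrative assumptions.

    # Illustrative mapping from freshness state to treatment; state names other
    # than "Current" are assumed, not taken from the BenchLM methodology page.
    TREATMENT = {
        "Current": "strong differentiator",
        "Aging": "benchmark to watch",
        "Stale": "display-only reference",
    }

    def benchmark_treatment(staleness_state: str, display_only: bool) -> str:
        # A display-only flag (as with We-Math here) overrides freshness.
        if display_only:
            return "display-only reference"
        return TREATMENT.get(staleness_state, "display-only reference")

    benchmark_treatment("Current", display_only=True)  # "display-only reference"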

Tracked score table (6 models)

1. Qwen3.6 Plus (qwen3-6-plus): 89.0%
2. Qwen3.5 397B (qwen3-5-397b): 87.9%
3. Gemini 3 Pro (gemini-3-pro): 86.9%
4. Kimi K2.5 (kimi-k2-5): 84.7%
5. GPT-5.2 (gpt-5-2): 79.0%
6. Claude Opus 4.5 (claude-opus-4-5): 70.0%

FAQ

What does We-Math measure?

We-Math measures visually grounded mathematical reasoning and answer generation in a multimodal setting, combining symbolic reasoning with figure understanding.

Which model leads the published We-Math snapshot?

Qwen3.6 Plus currently leads the published We-Math snapshot with a tracked score of 89.0%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on We-Math?

6 AI models are included in BenchLM's mirrored We-Math snapshot, based on the public leaderboard captured on April 10, 2026.

Last updated: April 10, 2026 · mirrored from the public benchmark leaderboard
