A multimodal math benchmark for visually grounded mathematical reasoning and answer generation.
As of March 2026, Qwen3.6 Plus leads the We-Math leaderboard with 89.0%, followed by Qwen3.5 397B (87.9%) and Gemini 3 Pro (86.9%).
1. Qwen3.6 Plus (Alibaba): 89.0%
2. Qwen3.5 397B (Alibaba): 87.9%
3. Gemini 3 Pro (Google): 86.9%
The top three models are clustered within 2.1 points (89.0% vs 86.9%), suggesting the benchmark is nearing saturation for frontier models.
Six models have been evaluated on We-Math, which falls in the Multimodal & Grounded category. That category carries a 12% weight in BenchLM.ai's overall scoring system, but We-Math itself is currently display-only: it is excluded from the scoring formula and does not directly affect overall rankings.
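To make the weighting concrete, here is a minimal Python sketch of how a category-weighted overall score could be computed while skipping display-only benchmarks such as We-Math. The data layout, function names, and any weights beyond the 12% figure are illustrative assumptions, not BenchLM.ai's actual implementation; the authoritative formula is on the methodology page.

    from dataclasses import dataclass

    @dataclass
    class BenchmarkResult:
        name: str
        category: str
        score: float        # percentage, e.g. 89.0
        display_only: bool  # True means shown for reference, excluded from scoring

    # Assumed weights; the page states Multimodal & Grounded carries 12%.
    CATEGORY_WEIGHTS = {
        "Multimodal & Grounded": 0.12,
        # ...remaining categories (assumed) would bring the total to 1.0
    }

    def overall_score(results: list[BenchmarkResult]) -> float:
        # Average the scored benchmarks inside each category, then combine
        # the category averages using their weights.
        by_category: dict[str, list[float]] = {}
        for r in results:
            if r.display_only:
                continue  # We-Math today: displayed but not scored
            by_category.setdefault(r.category, []).append(r.score)
        total, weight_used = 0.0, 0.0
        for category, scores in by_category.items():
            weight = CATEGORY_WEIGHTS.get(category, 0.0)
            total += weight * (sum(scores) / len(scores))
            weight_used += weight
        # Renormalise in case some categories have no scored benchmarks yet.
        return total / weight_used if weight_used else 0.0

Under this sketch, flipping We-Math's display_only flag to False would be the only change needed for it to start contributing to the Multimodal & Grounded average.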
Year
2026
Tasks
Visually grounded math problems
Format
Multimodal mathematical reasoning
Difficulty
Advanced multimodal mathematics
We-Math is useful as a visual-math stress test because it combines symbolic reasoning with figure understanding. It helps reveal whether a model's math strength transfers into multimodal settings.
Version
We-Math 2026
Refresh cadence
Quarterly
Staleness state
Current
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
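A minimal sketch of how that decision might look in code follows. The tier names come from this page, but the fields checked and the rules themselves are assumptions, chosen so that We-Math's metadata (Current, Quarterly, public benchmark set) lands in the display-only tier it currently occupies; the real policy lives on the BenchLM methodology page.

    def benchmark_tier(staleness_state: str, refresh_cadence: str,
                       question_availability: str) -> str:
        # Hypothetical rules; not BenchLM's actual policy.
        if staleness_state != "Current":
            # Stale benchmarks should not drive rankings.
            return "display-only reference"
        if question_availability == "Public benchmark set":
            # Fully public questions carry contamination risk, so even a
            # fresh benchmark can be kept out of the scoring formula.
            return "display-only reference"
        if refresh_cadence in ("Monthly", "Quarterly"):
            return "strong differentiator"
        return "benchmark to watch"

    print(benchmark_tier("Current", "Quarterly", "Public benchmark set"))
    # -> display-only reference, matching We-Math's current status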
Qwen3.6 Plus by Alibaba currently leads We-Math with a score of 89.0%.
Six AI models have been evaluated on We-Math on BenchLM.