A challenging mathematical reasoning benchmark reported in DeepSeek-V4 model evaluations.
BenchLM mirrors the published score view for IMOAnswerBench. DeepSeek V4 Pro (Max) leads the public snapshot at 89.8%, followed by DeepSeek V4 Flash (Max) at 88.4% and DeepSeek V4 Pro (High) at 88.0%. BenchLM does not use these results to rank models overall.
1. DeepSeek V4 Pro (Max), DeepSeek: 89.8%
2. DeepSeek V4 Flash (Max), DeepSeek: 88.4%
3. DeepSeek V4 Pro (High), DeepSeek: 88.0%
The published IMOAnswerBench snapshot is tightly clustered at the top: DeepSeek V4 Pro (Max) sits at 89.8%, while the third-ranked model trails by only 1.8 points. The full spread across the evaluated models is 54.5 points, so the benchmark still separates strong models even when the leaders cluster.
Six models have been evaluated on IMOAnswerBench. The benchmark falls in the Math category, which carries a 5% weight in BenchLM.ai's overall scoring system. IMOAnswerBench is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
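As a rough illustration of how a category-weighted overall score can skip display-only benchmarks: the 5% Math weight comes from the paragraph above, but every name, field, and the aggregation rule in this sketch are assumptions, not BenchLM's actual formula.

```python
from dataclasses import dataclass

# Only the 5% Math category weight is taken from the page; everything else
# here (names, fields, the aggregation rule) is a hypothetical illustration.
CATEGORY_WEIGHTS = {"Math": 0.05}  # other categories would carry the rest

@dataclass
class BenchmarkResult:
    name: str
    category: str
    score: float        # published score, e.g. 89.8 (percent)
    display_only: bool  # True -> shown for reference, excluded from scoring

def overall_score(results: list[BenchmarkResult]) -> float:
    """Weighted mean over scorable benchmarks; display-only rows are skipped."""
    total = 0.0
    weight_sum = 0.0
    for r in results:
        if r.display_only:  # IMOAnswerBench would be skipped here
            continue
        w = CATEGORY_WEIGHTS.get(r.category, 0.0)
        total += w * r.score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# A display-only benchmark does not move the overall score:
rows = [
    BenchmarkResult("IMOAnswerBench", "Math", 89.8, display_only=True),
    BenchmarkResult("HypotheticalMathBench", "Math", 72.0, display_only=False),
]
print(overall_score(rows))  # 72.0 -- only the scorable row counts
```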
Year: 2026
Tasks: Advanced mathematical answer generation
Format: Pass@1 math benchmark (sketched below)
Difficulty: Olympiad-level mathematics
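Since the format line above lists pass@1, here is a minimal sketch of what that metric computes for an answer-matching benchmark: the share of problems whose single sampled answer is correct. The exact-match grading rule and the function name are assumptions for illustration, not IMOAnswerBench's documented harness.

```python
# Minimal pass@1 sketch: one sampled answer per problem, graded by exact
# string match against the answer key.
def pass_at_1(predictions: list[str], answers: list[str]) -> float:
    """Percentage of problems whose single sampled answer matches the key."""
    assert len(predictions) == len(answers) and answers
    correct = sum(p.strip() == a.strip() for p, a in zip(predictions, answers))
    return 100.0 * correct / len(answers)

print(pass_at_1(["42", "x = 3", "7"], ["42", "x = 3", "9"]))  # 66.66...
```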
When exact values are published for frontier math comparisons, BenchLM stores IMOAnswerBench as a display-only row in the provider table.
Version: IMOAnswerBench 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
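A minimal sketch of how such freshness metadata could drive that three-way treatment; the tier names come from the sentence above, while the decision rule and function name are assumptions rather than BenchLM's documented logic.

```python
# Hypothetical mapping from freshness metadata to the three tiers named in
# the text above; the decision rule itself is an assumption.
def benchmark_tier(staleness_state: str, excluded_from_scoring: bool) -> str:
    if excluded_from_scoring:
        return "display-only reference"  # IMOAnswerBench's current treatment
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"  # stale, but still tracked

print(benchmark_tier("Current", excluded_from_scoring=True))
# -> display-only reference
```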
DeepSeek V4 Pro (Max) by DeepSeek currently leads with a score of 89.8% on IMOAnswerBench.
Six AI models have been evaluated on IMOAnswerBench on BenchLM.