A 2026 American Invitational Mathematics Examination snapshot used in frontier-model comparison tables for mathematical reasoning.
BenchLM mirrors the published score view for AIME26. GLM-5 and Kimi K2.5 share the top of the public snapshot at 95.8%, with GLM-5.1 close behind at 95.3%. BenchLM does not use these results to rank models overall.
1. GLM-5 (Z.AI): 95.8%
2. Kimi K2.5 (Moonshot AI): 95.8%
3. GLM-5.1 (Z.AI): 95.3%
The published AIME26 snapshot is tightly clustered at the top: GLM-5 and Kimi K2.5 both sit at 95.8%, and the third row is only 0.5 points behind. The spread across the seven published scores is 3.1 points, so most of them fall in a relatively narrow band.
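To make the clustering arithmetic concrete, here is a minimal sketch using only the three scores published on this page; the 3.1-point spread refers to the full seven-model list, which is not reproduced here.

```python
# Scores copied from the published snapshot above (three of seven rows).
scores = {"GLM-5": 95.8, "Kimi K2.5": 95.8, "GLM-5.1": 95.3}

top = max(scores.values())
# Gap from the top score to the third-ranked row.
gap_to_third = top - sorted(scores.values(), reverse=True)[2]

print(f"top score: {top:.1f}%")                      # 95.8%
print(f"gap to third row: {gap_to_third:.1f} pts")   # 0.5 pts
```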
Seven models have been evaluated on AIME26. The benchmark falls in the Math category, which carries a 5% weight in BenchLM.ai's overall scoring system. AIME26 itself is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
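As an illustration of how a category weight can coexist with a display-only benchmark, here is a hypothetical sketch of a weighted overall score that simply skips excluded rows. The `BenchmarkResult` fields, the non-Math weight, and the AIME25 score are illustrative assumptions, not BenchLM's actual formula or data.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    category: str       # e.g. "Math"
    score: float        # percentage, 0-100
    display_only: bool  # excluded from scoring if True

# Math's 5% weight comes from the paragraph above; "Coding" is illustrative.
CATEGORY_WEIGHTS = {"Math": 0.05, "Coding": 0.25}

def overall_score(results: list[BenchmarkResult]) -> float:
    """Weighted average of per-category means over scored benchmarks only."""
    total, weight_sum = 0.0, 0.0
    for cat, weight in CATEGORY_WEIGHTS.items():
        scored = [r.score for r in results
                  if r.category == cat and not r.display_only]
        if scored:
            total += weight * (sum(scored) / len(scored))
            weight_sum += weight
    return total / weight_sum if weight_sum else 0.0

results = [
    BenchmarkResult("AIME26", "Math", 95.8, display_only=True),   # ignored
    BenchmarkResult("AIME25", "Math", 92.0, display_only=False),  # counted
]
print(f"{overall_score(results):.1f}")  # 92.0: AIME26 never enters the sum
```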
Year: 2026
Tasks: Competition math problems
Format: Short-answer mathematics (integer answers from 0 to 999)
Difficulty: Olympiad-style mathematics
AIME-style benchmarks remain among the fastest ways to separate top reasoning models on olympiad-style math. AIME 2026 is a newer contest-year snapshot than the legacy AIME rows already tracked on BenchLM.
Version: AIME26 (2026)
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
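As a purely hypothetical illustration (the actual rules live on the methodology page), the tier decision could be sketched as a function of the freshness fields shown above. The contamination rationale for public question sets is an assumption, not something BenchLM states; it is chosen here only so the sketch agrees with AIME26's display-only status.

```python
def display_tier(staleness_state: str, question_availability: str) -> str:
    """Hypothetical mapping from freshness metadata to a display tier."""
    # Assumption: fully public question sets carry contamination risk,
    # so this sketch relegates them to reference status regardless of age.
    if question_availability == "Public benchmark set":
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# AIME26 metadata: current, but its questions are public.
print(display_tier("Current", "Public benchmark set"))  # display-only reference
```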
GLM-5 by Z.AI currently leads with a score of 95.8% on AIME26, tied with Kimi K2.5.
Seven AI models have been evaluated on AIME26 on BenchLM.