A 2026 American Invitational Mathematics Examination snapshot used in frontier-model comparison tables for mathematical reasoning.
As of March 2026, GLM-5 leads the AIME26 leaderboard at 95.8%, with Kimi K2.5 tied at 95.8% and Qwen3.6 Plus just behind at 95.3%.
GLM-5 (Zhipu AI): 95.8%
Kimi K2.5 (Moonshot AI): 95.8%
Qwen3.6 Plus (Alibaba): 95.3%
According to BenchLM.ai, GLM-5 holds the top spot on the AIME26 benchmark at 95.8%, with Kimi K2.5 matching that score and Qwen3.6 Plus at 95.3%. The top three models are clustered within 0.5 points, suggesting the benchmark is nearing saturation for frontier models.
Five models have been evaluated on AIME26. The benchmark falls in BenchLM.ai's Math category, which carries a 5% weight in the overall scoring system. AIME26 itself is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
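To make the exclusion rule concrete, here is a minimal sketch of how a weighted category score might skip display-only benchmarks. This is not BenchLM's actual implementation: the field names, the SomeMathBench entry, and its score are hypothetical; only the 5% Math weight and AIME26's display-only status come from the page.

```python
CATEGORY_WEIGHTS = {"math": 0.05}  # Math carries 5% of the overall score

# Hypothetical benchmark records; AIME26's display-only flag is from the page,
# SomeMathBench and its 93.0 score are purely illustrative.
benchmarks = [
    {"name": "AIME26", "category": "math", "score": 95.8, "display_only": True},
    {"name": "SomeMathBench", "category": "math", "score": 93.0, "display_only": False},
]

def category_contribution(benchmarks, category):
    """Average the scored (non-display-only) benchmarks in a category,
    then apply that category's weight to the overall total."""
    scored = [b["score"] for b in benchmarks
              if b["category"] == category and not b["display_only"]]
    if not scored:
        return 0.0  # nothing in this category counts toward the total
    return CATEGORY_WEIGHTS[category] * (sum(scored) / len(scored))

print(category_contribution(benchmarks, "math"))  # 4.65; AIME26 is ignored
```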
Year: 2026
Tasks: Competition math problems
Format: Short-answer mathematics
Difficulty: Olympiad-style mathematics
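Because AIME answers are integers from 0 to 999 graded by exact match, the short-answer format is simple to score automatically. The sketch below is a hypothetical grader, not any official harness; the whitespace and leading-zero normalization are our own assumptions about how model output might be cleaned up.

```python
def grade_aime_answer(model_output: str, reference: int) -> bool:
    """Return True iff the model's final answer exactly matches the key.
    AIME answers are integers in [0, 999]; anything else scores zero."""
    text = model_output.strip()
    if not text.lstrip("-").isdigit():
        return False  # non-numeric output cannot match
    value = int(text)
    return 0 <= value <= 999 and value == reference

print(grade_aime_answer(" 042 ", 42))  # True: padding and leading zeros ignored
print(grade_aime_answer("42.0", 42))   # False: not an integer literal
```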
AIME-style benchmarks remain one of the fastest ways to separate top reasoning models on olympiad-style math. AIME 2026 is a newer contest-year snapshot than the legacy AIME rows already tracked on BenchLM.
Version: AIME26 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
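As a rough illustration of that triage, the sketch below maps freshness metadata to the three treatment tiers named above. The thresholds, field names, and function signature are illustrative guesses, not BenchLM's published policy, which presumably weighs more signals (such as scoring inclusion) than these two.

```python
def freshness_tier(staleness_state: str, months_since_refresh: int) -> str:
    """Map freshness metadata to a treatment tier (assumed thresholds)."""
    if staleness_state == "current" and months_since_refresh <= 3:
        return "strong differentiator"   # fresh within one quarterly cycle
    if staleness_state == "current":
        return "benchmark to watch"      # still current, but refresh is due
    return "display-only reference"      # stale data is shown, not scored

print(freshness_tier("current", 2))  # "strong differentiator"
print(freshness_tier("stale", 9))    # "display-only reference"
```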