A November 2025 HMMT slice for high-end mathematical reasoning comparisons.
BenchLM mirrors the published score view for HMMT Nov 2025. GLM-5 leads the public snapshot at 96.9%, followed by Qwen3.6 Plus (94.6%) and GLM-5.1 (94.0%). BenchLM does not use these results to rank models overall.
1. GLM-5 (Z.AI): 96.9%
2. Qwen3.6 Plus (Alibaba): 94.6%
3. GLM-5.1 (Z.AI): 94.0%
The published HMMT Nov 2025 snapshot is tightly clustered at the top: GLM-5 sits at 96.9%, and the third-ranked model is only 2.9 points behind. The full spread across the published scores is 5.8 points, so the results sit in a relatively narrow band.
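To make the gap figures concrete, the short Python sketch below recomputes them from the three published scores. It is an illustration only; the remaining evaluated models' scores are not reproduced on this page.

```python
# Recompute the quoted gap figures from the published scores.
# Only the top three scores are listed here; the other evaluated
# models' results are not reproduced on this page.
scores = [96.9, 94.6, 94.0]  # GLM-5, Qwen3.6 Plus, GLM-5.1

first_to_third_gap = scores[0] - scores[2]   # 96.9 - 94.0 = 2.9 points
spread = max(scores) - min(scores)           # spread across the listed scores

print(f"Gap from first to third: {first_to_third_gap:.1f} points")
print(f"Spread across listed scores: {spread:.1f} points")
```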
6 models have been evaluated on HMMT Nov 2025. The benchmark falls in the Math category, which carries a 5% weight in BenchLM.ai's overall scoring system. HMMT Nov 2025 is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
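The weight-and-exclusion behaviour can be pictured with a minimal sketch. Everything below is an assumption about the shape of the calculation, not BenchLM's published formula; the only figures taken from this page are the 5% Math weight and the fact that HMMT Nov 2025 is reference-only.

```python
# Illustrative sketch only; BenchLM's actual scoring formula is not shown on this page.
# Assumed structure: each benchmark record carries a category, a score, and a flag
# indicating whether it counts toward the overall ranking.
CATEGORY_WEIGHTS = {"Math": 0.05}  # Math carries a 5% weight per the page

benchmarks = [
    {"name": "HMMT Nov 2025", "category": "Math", "score": 96.9, "scored": False},
    # ...a model's other Math benchmarks would be listed here
]

def category_contribution(records, category):
    """Average the scored (non-reference) benchmarks in a category, then apply its weight."""
    scored = [r["score"] for r in records if r["category"] == category and r["scored"]]
    if not scored:
        return 0.0  # display-only entries like HMMT Nov 2025 contribute nothing
    return CATEGORY_WEIGHTS[category] * (sum(scored) / len(scored))

print(category_contribution(benchmarks, "Math"))  # 0.0: HMMT Nov 2025 is reference-only
```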
Year: 2025
Tasks: Competition math problems
Format: Contest mathematics
Difficulty: Olympiad-style mathematics
This entry preserves the exact provider-table values from the late-2025 HMMT contest cycle. It is useful for checking whether frontier models generalize across separate contest sets rather than performing well only on a single annual rollup.
Version: HMMT Nov 2025
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
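As one way to read the freshness fields above, the sketch below maps that metadata to the three tiers named in this paragraph. The decision rules and the scoring-formula flag are assumptions for illustration only; the methodology page holds the actual policy.

```python
# Hypothetical tiering rule. The field values mirror the metadata shown above,
# but the decision logic is an assumption, not BenchLM's documented policy.
def benchmark_tier(in_scoring_formula: bool,
                   staleness_state: str,
                   question_availability: str) -> str:
    if not in_scoring_formula or staleness_state != "Current":
        return "display-only reference"
    if question_availability == "Public benchmark set":
        # Public questions can leak into training data, so treat with more caution.
        return "benchmark to watch"
    return "strong differentiator"

# HMMT Nov 2025 is excluded from the scoring formula, so it lands in the reference tier.
print(benchmark_tier(False, "Current", "Public benchmark set"))  # display-only reference
```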
GLM-5 by Z.AI currently leads with a score of 96.9% on HMMT Nov 2025.
6 AI models have been evaluated on HMMT Nov 2025 on BenchLM.