Tool-free variant of Humanity's Last Exam that isolates a model's raw frontier reasoning.
BenchLM mirrors the published score view for HLE w/o tools. Claude Mythos Preview leads the public snapshot at 56.8%, followed by Claude Opus 4.7 (Adaptive) at 46.9% and Gemini 3.1 Pro at 45.4%. BenchLM does not use these results to rank models overall.
1. Claude Mythos Preview (Anthropic)
2. Claude Opus 4.7 (Adaptive) (Anthropic)
3. Gemini 3.1 Pro (Google)
The published HLE w/o tools snapshot is tightly clustered at the top: Claude Mythos Preview sits at 56.8%, while the third row is only 11.4 points behind. The broader top-10 spread is 22.8 points, so the benchmark still separates strong models even when the leaders cluster.
15 models have been evaluated on HLE w/o tools. The benchmark falls in the Knowledge category. This category carries a 12% weight in BenchLM.ai's overall scoring system. HLE w/o tools is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
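BenchLM's exact scoring formula is not published here, but the mechanics described above (a 12% category weight, with display-only benchmarks excluded from the average) can be sketched as follows. The helper name, the second benchmark score, and the aggregation by simple average are all illustrative assumptions, not BenchLM's actual implementation.

```python
def category_contribution(scores, weight, excluded):
    """Average only the benchmarks that count toward scoring,
    then apply the category's weight to that average."""
    counted = [s for s, ex in zip(scores, excluded) if not ex]
    if not counted:
        return 0.0
    return weight * sum(counted) / len(counted)

# Knowledge carries a 12% weight. HLE w/o tools (56.8 here) is
# marked excluded, so only the other, hypothetical benchmark
# score (80.0) feeds the category average: 0.12 * 80.0 = 9.6.
knowledge_points = category_contribution(
    scores=[56.8, 80.0],
    weight=0.12,
    excluded=[True, False],
)
```

Because the exclusion flag removes HLE w/o tools before averaging, its 56.8% has no effect on the category contribution, matching the "displayed for reference" behavior described above.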
Year: 2026
Tasks: Expert-level questions
Format: Tool-free expert QA
Difficulty: Frontier expert level
This variant removes external tools so the score reflects pure model performance on frontier expert questions.
Version: HLE w/o tools 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
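The three tiers named above (strong differentiator, benchmark to watch, display-only reference) suggest a simple classification over the freshness metadata. The mapping below is a hypothetical illustration keyed on the staleness state alone; BenchLM's real policy lives on its methodology page and may weigh other fields such as refresh cadence.

```python
def freshness_tier(staleness_state):
    """Map a benchmark's staleness state to a treatment tier.
    Illustrative only: the tier names come from the page text,
    but the state-to-tier mapping is an assumption."""
    tiers = {
        "Current": "strong differentiator",
        "Aging": "benchmark to watch",
        "Stale": "display-only reference",
    }
    return tiers.get(staleness_state, "display-only reference")
```

Under this sketch, HLE w/o tools (staleness state: Current) would land in the strongest tier, even though it is currently kept out of the scoring formula for other reasons.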