Tool-augmented Humanity's Last Exam scores reported in DeepSeek-V4 thinking-mode evaluations.
BenchLM mirrors the published score view for HLE w/ tools. DeepSeek V4 Pro (Max) leads the public snapshot at 48.2%, followed by DeepSeek V4 Flash (Max) at 45.1% and DeepSeek V4 Pro (High) at 44.7%. BenchLM does not use these results to rank models overall.
| Model | Developer | Score |
| --- | --- | --- |
| DeepSeek V4 Pro (Max) | DeepSeek | 48.2% |
| DeepSeek V4 Flash (Max) | DeepSeek | 45.1% |
| DeepSeek V4 Pro (High) | DeepSeek | 44.7% |
The published HLE w/ tools snapshot is tightly clustered at the top: DeepSeek V4 Pro (Max) sits at 48.2%, while the third-ranked model is only 3.5 points behind. The broader spread across the published table is 7.9 points, so most of the published scores sit in a relatively narrow band.
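As a quick check on those figures, here is a minimal sketch that recomputes the leader-to-third gap from the scores quoted above. The dictionary simply restates the published table; it is not BenchLM data-access code.

```python
# Recompute the leader-to-third gap from the published HLE w/ tools scores.
# Scores restate the table above; variable names are illustrative.
scores = {
    "DeepSeek V4 Pro (Max)": 48.2,
    "DeepSeek V4 Flash (Max)": 45.1,
    "DeepSeek V4 Pro (High)": 44.7,
}

ranked = sorted(scores.values(), reverse=True)
print(f"leader-to-third gap: {ranked[0] - ranked[2]:.1f} points")  # 3.5
```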
Four models have been evaluated on HLE w/ tools. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system. However, HLE w/ tools is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
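To make the "displayed for reference, excluded from scoring" distinction concrete, here is a hedged sketch of how a category-weighted score could skip display-only benchmarks. The dataclass, field names, and the way the 22% Agentic weight is applied are assumptions for illustration; BenchLM's actual formula is documented on its methodology page.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    category: str       # e.g. "Agentic"
    score: float        # percentage, 0-100
    display_only: bool  # True -> shown for reference, never scored

# Assumption: the Agentic category contributes 22% of the overall score,
# with the remaining 78% spread across other categories (not modeled here).
CATEGORY_WEIGHTS = {"Agentic": 0.22}

def category_score(results: list[BenchmarkResult], category: str) -> float | None:
    """Average only the scored (non-display-only) benchmarks in one category."""
    scored = [r.score for r in results
              if r.category == category and not r.display_only]
    return sum(scored) / len(scored) if scored else None

results = [
    BenchmarkResult("HLE w/ tools", "Agentic", 48.2, display_only=True),
    BenchmarkResult("Hypothetical agentic benchmark", "Agentic", 61.0, display_only=False),
]

# HLE w/ tools is filtered out, so it contributes nothing to the Agentic slice.
print(category_score(results, "Agentic"))  # -> 61.0
```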
Year: 2026
Tasks: Expert questions with tool use
Format: Pass@1
Difficulty: Frontier tool-augmented reasoning
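The Pass@1 format above refers to scoring a model on its first attempt per question. For reference, here is the standard unbiased pass@k estimator (Chen et al., 2021), which reduces to the fraction of correct samples when k = 1. Whether DeepSeek's published numbers use this exact estimator is an assumption; the metadata above only states the metric is Pass@1.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n = samples drawn per question, c = correct samples."""
    if n - c < k:
        # Too few incorrect samples to fill a k-subset: success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=10, c=4, k=1))  # 0.4 -- with k=1 this is simply c/n
```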
BenchLM stores HLE w/ tools as a display-only row in the provider table, populated only when exact values are published in DeepSeek-V4 evaluations.
Version: HLE w/ tools 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
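A minimal sketch of what such a freshness check could look like, driven by the metadata above (quarterly refresh cadence, "Current" staleness state). The thresholds and tier names mirror the prose but are assumptions, not BenchLM's actual policy; see the methodology page for the real rules.

```python
from datetime import date, timedelta

REFRESH_CADENCE = timedelta(days=91)  # "Quarterly" (assumed ~91 days)

def staleness_tier(last_refreshed: date, today: date) -> str:
    """Map benchmark age onto the three tiers named in the prose above."""
    age = today - last_refreshed
    if age <= REFRESH_CADENCE:
        return "strong differentiator"   # fresh within one cadence window
    if age <= 2 * REFRESH_CADENCE:
        return "benchmark to watch"      # one missed refresh
    return "display-only reference"      # stale: shown but not scored

print(staleness_tier(date(2026, 1, 15), today=date(2026, 3, 1)))
# -> "strong differentiator"
```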
DeepSeek V4 Pro (Max) by DeepSeek currently leads with a score of 48.2% on HLE w/ tools.
Four AI models have been evaluated on HLE w/ tools on BenchLM.