Tool-free variant of Humanity's Last Exam that isolates a model's raw frontier reasoning.
As of March 2026, GPT-5.4 leads the HLE w/o tools leaderboard with 39.8%, followed by GPT-5.4 mini (28.2%) and GPT-5.4 nano (24.3%).
GPT-5.4 (OpenAI): 39.8%
GPT-5.4 mini (OpenAI): 28.2%
GPT-5.4 nano (OpenAI): 24.3%
According to BenchLM.ai, GPT-5.4 leads the HLE w/o tools benchmark with a score of 39.8%, followed by GPT-5.4 mini (28.2%) and GPT-5.4 nano (24.3%). There is significant spread across the leaderboard, making this benchmark effective at differentiating model capabilities.
Four models have been evaluated on HLE w/o tools. The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. HLE w/o tools is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
Year: 2026
Tasks: Expert-level questions
Format: Tool-free expert QA
Difficulty: Frontier expert level
This variant removes external tools so the score reflects pure model performance on frontier expert questions.