Humanity's Last Exam without tools (HLE w/o tools)

Tool-free variant of Humanity's Last Exam that isolates a model's raw frontier reasoning.

Top Models on HLE w/o tools — March 2026

As of March 2026, GPT-5.4 leads the HLE w/o tools leaderboard with 39.8%, followed by GPT-5.4 mini (28.2%) and GPT-5.4 nano (24.3%).

4 models · Knowledge · Updated March 17, 2026

According to BenchLM.ai, GPT-5.4 leads the HLE w/o tools benchmark with a score of 39.8%, followed by GPT-5.4 mini (28.2%) and GPT-5.4 nano (24.3%). There is significant spread across the leaderboard, making this benchmark effective at differentiating model capabilities.

4 models have been evaluated on HLE w/o tools. The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. HLE w/o tools is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
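The aggregation described above can be sketched in code. This is a hypothetical illustration, not BenchLM.ai's published formula: it assumes per-category averaging with fixed category weights (Knowledge at 12%), and that reference-only benchmarks such as HLE w/o tools carry an `excluded` flag that removes them from the aggregate.

```python
def overall_score(results, category_weights):
    """Aggregate per-benchmark scores into a weighted overall score.

    results: list of dicts with keys "category", "score" (0-100),
    and "excluded" (bool). Excluded benchmarks are displayed for
    reference only and skipped during aggregation.
    """
    # Group non-excluded scores by category.
    by_category = {}
    for r in results:
        if r["excluded"]:
            continue
        by_category.setdefault(r["category"], []).append(r["score"])

    # Weighted average of per-category means, normalized over the
    # weights of the categories actually present.
    total = 0.0
    weight_sum = 0.0
    for category, scores in by_category.items():
        weight = category_weights.get(category, 0.0)
        total += weight * (sum(scores) / len(scores))
        weight_sum += weight
    return total / weight_sum if weight_sum else 0.0


# The excluded HLE w/o tools score does not move the aggregate.
results = [
    {"category": "Knowledge", "score": 70.0, "excluded": False},
    {"category": "Knowledge", "score": 39.8, "excluded": True},  # HLE w/o tools
]
print(overall_score(results, {"Knowledge": 0.12}))
```

With only one scored category present, the normalization makes the result equal that category's mean; the point of the sketch is that flipping `excluded` to `False` on the HLE w/o tools row is what would pull the aggregate down.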

About HLE w/o tools

Year: 2026
Tasks: Expert-level questions
Format: Tool-free expert QA
Difficulty: Frontier expert level

This variant removes external tools so the score reflects pure model performance on frontier expert questions.


Leaderboard (4 models)

#1 GPT-5.4: 39.8%
#2 GPT-5.4 mini: 28.2%
#3 GPT-5.4 nano: 24.3%
#4 GPT-5 mini: 18.3%

FAQ

What does HLE w/o tools measure?

HLE w/o tools is a tool-free variant of Humanity's Last Exam that isolates a model's raw frontier reasoning.

Which model scores highest on HLE w/o tools?

GPT-5.4 by OpenAI currently leads with a score of 39.8% on HLE w/o tools.

How many models are evaluated on HLE w/o tools?

4 AI models have been evaluated on HLE w/o tools on BenchLM.

Last updated: March 17, 2026
