A tool-use benchmark focused on selecting, sequencing, and completing tasks with external tools.
BenchLM mirrors the published score view for Toolathlon. GPT-5.4 leads the public snapshot at 54.6%, followed by MiniMax M2.7 (46.3%) and Claude Opus 4.5 (43.5%). BenchLM does not use these results to rank models overall.
1. GPT-5.4 (OpenAI): 54.6%
2. MiniMax M2.7 (MiniMax): 46.3%
3. Claude Opus 4.5 (Anthropic): 43.5%
The published Toolathlon snapshot is tightly clustered at the top: GPT-5.4 sits at 54.6%, while the third-place model is only 11.1 points behind. The broader top-10 spread is 26.8 points, so the benchmark still separates strong models even when the leaders cluster.
Nine models have been evaluated on Toolathlon. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system. Toolathlon itself is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
Year: 2026
Tasks: Multi-tool workflows
Format: Interactive tool-calling evaluation
Difficulty: Advanced tool use
Toolathlon is useful for judging whether a model can do more than answer in chat and instead complete multi-step tool workflows.
Version: Toolathlon 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
GPT-5.4 by OpenAI currently leads with a score of 54.6% on Toolathlon.
9 AI models have been evaluated on Toolathlon on BenchLM.