A benchmark for tool-calling over Model Context Protocol integrations and external tools.
As of March 2026, GPT-5.4 leads the MCP Atlas leaderboard with 67.2%, followed by GPT-5.2 (60.6%) and GPT-5.4 mini (57.7%).
1. GPT-5.4 (OpenAI): 67.2%
2. GPT-5.2 (OpenAI): 60.6%
3. GPT-5.4 mini (OpenAI): 57.7%
The scores show moderate spread, with meaningful differences between the top-tier and mid-tier models.
Five models have been evaluated on MCP Atlas. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system. MCP Atlas is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
Year: 2026
Tasks: Tool-integrated agent tasks
Format: Interactive tool-calling evaluation
Difficulty: Advanced tool use
OpenAI reports MCP Atlas as a tool-use benchmark that measures how well models work with MCP-backed systems and external tools.
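To illustrate the kind of interaction such a benchmark exercises: MCP is built on JSON-RPC 2.0, and a client invokes an external tool via the protocol's `tools/call` method. The sketch below builds such a request in Python; the tool name `get_weather` and its arguments are hypothetical placeholders, not part of MCP Atlas or any real server.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request for MCP's tools/call method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        # "name" identifies the tool; "arguments" carries its inputs.
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool and arguments, for illustration only.
request = make_tool_call(1, "get_weather", {"city": "Berlin"})
print(json.loads(request)["method"])  # tools/call
```

A benchmark like MCP Atlas evaluates how reliably a model chooses the right tool, fills in well-formed arguments, and interprets the response across many such round trips.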