A benchmark for tool-calling over Model Context Protocol integrations and external tools.
BenchLM mirrors the published score view for MCP Atlas. GLM-5.1 leads the public snapshot at 71.8%, followed by GPT-5.4 (67.2%) and GPT-5.4 mini (57.7%). BenchLM does not use these results to rank models overall.
1. GLM-5.1 (Z.AI): 71.8%
2. GPT-5.4 (OpenAI): 67.2%
3. GPT-5.4 mini (OpenAI): 57.7%
The published MCP Atlas snapshot is tightly clustered at the top: GLM-5.1 sits at 71.8%, and the third-place model is only 14.1 points behind. The spread across all nine evaluated models is 42.3 points, so the benchmark still separates strong models even when the leaders cluster.
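For readers who want to check the gap math, here is a minimal sketch that recomputes the quoted spreads from the three published scores. The snapshot floor (top score minus the 42.3-point spread) is an inference from this page's numbers, not a published value.

```python
# Recompute the point gaps quoted above from the published snapshot.
# Only the top three scores appear on this page; the snapshot floor is
# inferred from the stated 42.3-point spread, not read from a table.
published = {
    "GLM-5.1": 71.8,
    "GPT-5.4": 67.2,
    "GPT-5.4 mini": 57.7,
}

top = max(published.values())
third = sorted(published.values(), reverse=True)[2]

print(f"gap to third place: {top - third:.1f} points")  # 14.1
print(f"implied snapshot floor: {top - 42.3:.1f}%")     # 29.5 (inferred)
```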
Nine models have been evaluated on MCP Atlas. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system. MCP Atlas itself is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
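To make that weighting concrete, here is a hypothetical sketch of how a category-weighted overall score could skip display-only benchmarks. Only the 22% Agentic weight comes from this page; the other category names, weights, benchmarks, and the display_only flag are illustrative assumptions, not BenchLM's actual formula.

```python
# Hypothetical sketch of category-weighted scoring. Only the 22% Agentic
# weight comes from this page; everything else is an illustrative assumption.
CATEGORY_WEIGHTS = {"Agentic": 0.22, "Reasoning": 0.40, "Coding": 0.38}

# (benchmark, category, score, display_only) -- display-only rows are shown
# on the site but skipped when the overall score is computed.
results = [
    ("MCP Atlas", "Agentic", 71.8, True),       # reference only, excluded
    ("AgentBench-X", "Agentic", 64.0, False),   # hypothetical benchmark
    ("ReasonEval", "Reasoning", 80.0, False),   # hypothetical benchmark
]

def overall(results, weights):
    """Average scorable benchmarks per category, then weight and normalize."""
    by_cat = {}
    for name, cat, score, display_only in results:
        if not display_only:
            by_cat.setdefault(cat, []).append(score)
    total = sum(weights[c] * sum(s) / len(s) for c, s in by_cat.items())
    used = sum(weights[c] for c in by_cat)  # renormalize over covered categories
    return total / used if used else 0.0

print(f"overall score: {overall(results, CATEGORY_WEIGHTS):.2f}")
```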
Year: 2026
Tasks: Tool-integrated agent tasks
Format: Interactive tool-calling evaluation
Difficulty: Advanced tool use
OpenAI reports MCP Atlas as a tool-use benchmark that measures how well models work with MCP-backed systems and external tools.
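For context on what an MCP integration looks like at the wire level, MCP messages are JSON-RPC 2.0, and a client invokes a server-exposed tool with a tools/call request. The sketch below builds one such request; the tool name search_docs and its arguments are hypothetical, and this page does not describe the harness MCP Atlas actually uses.

```python
import json

# Minimal MCP-style tool invocation: MCP messages are JSON-RPC 2.0, and a
# client calls a server-exposed tool with the "tools/call" method. The tool
# name and arguments below are hypothetical; a real client would first
# discover available tools with "tools/list".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",                    # hypothetical tool
        "arguments": {"query": "refund policy"},  # hypothetical arguments
    },
}

print(json.dumps(request, indent=2))
```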
Version: MCP Atlas 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
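As a rough illustration, the sketch below maps freshness metadata to one of the three tiers named above. The decision rules are entirely hypothetical; note that MCP Atlas is display-only despite being current and refreshed quarterly, so the real policy clearly weighs more than freshness alone.

```python
# Hypothetical mapping from freshness metadata to a display tier. The three
# tier names come from the page; the thresholds are illustrative only, and
# BenchLM's actual policy evidently considers more than freshness (MCP Atlas
# is display-only despite being current).
def display_tier(staleness_state: str, refresh_cadence: str) -> str:
    fresh_cadences = ("Monthly", "Quarterly")  # assumed cutoff
    if staleness_state == "Current" and refresh_cadence in fresh_cadences:
        return "strong differentiator"
    if staleness_state == "Current":
        return "benchmark to watch"
    return "display-only reference"

print(display_tier("Current", "Quarterly"))  # freshness-only view of MCP Atlas
```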
GLM-5.1 by Z.AI currently leads with a score of 71.8% on MCP Atlas.
Nine AI models have been evaluated on MCP Atlas on BenchLM.