
Best Tool Use & Function Calling Models in 2026

This reporting page focuses on structured output, tool routing, function calling, and MCP-style task completion. It is narrower than the general agentic leaderboard and more directly useful to developers choosing models for tool-heavy applications.

This page ranks models using only sourced tool-use benchmarks in the reporting family.

Bottom line: Tool-use quality determines whether your AI agent can actually call APIs, use MCP servers, and route structured requests. Not all "agentic" models are equally good at it.
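As a concrete illustration of what "function calling" means in practice, here is a minimal sketch of the pattern these benchmarks test: the model emits a structured call, and your code validates it against a schema before dispatching to local code. The tool name, schema, and dispatcher below are hypothetical examples, not tied to any model or API on this page.

```python
import json

# Hypothetical tool schema in the JSON-Schema style most function-calling
# APIs use. All names and fields here are illustrative only.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city: str) -> str:
    # Stand-in for a real API call.
    return f"Sunny in {city}"

TOOL_REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Validate a model-emitted tool call and route it to local code."""
    call = json.loads(tool_call_json)
    fn = TOOL_REGISTRY.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['name']}")
    args = call.get("arguments", {})
    # Reject calls that omit required arguments before executing anything.
    for required in WEATHER_TOOL["parameters"]["required"]:
        if required not in args:
            raise ValueError(f"missing argument: {required}")
    return fn(**args)

# A well-formed call is routed; a malformed one is caught before execution.
result = dispatch('{"name": "get_weather", "arguments": {"city": "Oslo"}}')
```

A model that scores well on tool-use benchmarks is one that reliably produces the well-formed call above rather than the malformed one.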

According to BenchLM.ai, Qwen3.6 Plus leads this ranking with a score of 56.9, followed by Qwen3.5 397B (54) and Claude Opus 4.5 (53.7). The 2.9-point gap between first and second place suggests a genuine performance difference at the top, though the field tightens below that.

The best open-weight option is Qwen3.5 397B (ranked #2 with a score of 54). Open-weight models are highly competitive in this category — self-hosting is a viable alternative to proprietary APIs.

This ranking is based on provisional overall weighted scores computed with BenchLM.ai's scoring formula. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.


Full Rankings (6 models)

Rank  Model             Developer    License      Context  Score (sourced avg)
1     Qwen3.6 Plus      Alibaba      Proprietary  1M       56.9
2     Qwen3.5 397B      Alibaba      Open Weight  128K     54
3     Claude Opus 4.5   Anthropic    Proprietary  200K     53.7
4     Qwen3.6-35B-A3B   Alibaba      Open Weight  262K     52.3
5     GLM-5             Z.AI         Open Weight  200K     50.3
6     Kimi K2.5         Moonshot AI  Open Weight  128K     47.1

These rankings update weekly

Get notified when models move. One email a week with what changed and why.

Free. No spam. Unsubscribe anytime.

Key Takeaways

The top model on this sourced reporting-family slice is Qwen3.6 Plus by Alibaba with an average of 56.9.

The best open-weight model is Qwen3.5 397B at position #2.

Six models are listed with sourced benchmark coverage in this reporting family.

Score in Context

What these scores mean

This ranking averages sourced tool-use benchmarks. It is narrower than the agentic category and focuses specifically on function calling and structured tool execution.

Known limitations

Tool-use benchmarks test specific function-calling patterns. Real-world tool use also depends on prompt engineering, retry logic, and error handling that benchmarks cannot capture.

Last updated: April 16, 2026
