Tool-Agent-User Benchmark (TAU-bench)

TAU-bench evaluates AI agents in realistic enterprise scenarios requiring multi-turn tool use, database interactions, and policy adherence. It tests across retail and airline domains, measuring an agent's ability to reliably complete customer service tasks while following complex business rules.
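
To make that concrete, the sketch below shows the shape of a TAU-bench-style episode: the agent calls a mock tool against a small database, and a business rule constrains what the tool will do. The tool name, orders table, and policy rule here are invented for illustration; they are not TAU-bench's actual harness.

```python
# Illustrative sketch of a TAU-bench-style tool call. All names here
# (cancel_order, ORDERS, the shipped-order rule) are hypothetical.

ORDERS = {"A100": {"status": "shipped"}, "A101": {"status": "processing"}}

def cancel_order(order_id: str) -> str:
    """Mock tool: cancel an order while enforcing a business rule."""
    order = ORDERS.get(order_id)
    if order is None:
        return "error: unknown order"
    # Policy: shipped orders cannot be cancelled, only returned.
    if order["status"] == "shipped":
        return "refused: shipped orders must use the return flow"
    order["status"] = "cancelled"
    return "ok: order cancelled"

# A task is scored on the final database state, not the conversation:
# the agent passes only if it satisfied the user AND obeyed policy.
print(cancel_order("A101"))  # ok: order cancelled
print(cancel_order("A100"))  # refused: shipped orders must use the return flow
```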

How BenchLM shows TAU-bench right now

BenchLM tracks TAU-bench in its local dataset, but exact-source verification records for these rows are still being attached. To avoid showing a blank benchmark page, BenchLM displays the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.

38 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only

Tracked score on TAU-bench — April 7, 2026

BenchLM mirrors the published tracked score view for TAU-bench. Claude Mythos Preview leads the public snapshot at 89.2%, followed by Claude Sonnet 4.6 (87.5%) and Claude Sonnet 4.5 (86.2%). BenchLM does not use these results to rank models overall.

38 models · Agentic · 10% of category score · Refreshing · Updated April 7, 2026

The published TAU-bench snapshot is tightly clustered at the top: Claude Mythos Preview sits at 89.2%, and the third-place model, Claude Sonnet 4.5, is only 3.0 points behind. The top-10 spread is 9.5 points, so most of the leading scores sit in a relatively narrow band.
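
The clustering figures can be reproduced directly from the published top-10 scores in the table below:

```python
# Top-10 tracked TAU-bench scores from the table below (percent).
top10 = [89.2, 87.5, 86.2, 84.8, 83.4, 82.4, 82.1, 80.5, 80.1, 79.7]

gap_to_third = round(top10[0] - top10[2], 1)   # 3.0 points
top10_spread = round(top10[0] - top10[-1], 1)  # 9.5 points
print(gap_to_third, top10_spread)
```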

38 models have been evaluated on TAU-bench. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system. Within that category, TAU-bench contributes 10% of the category score, so it would effectively account for about 2.2% of a model's overall score (0.22 × 0.10). While the benchmark remains display-only, however, these tracked scores are not applied to overall rankings.
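
As a quick check on how that weighting would flow through once the rows are verified, the effective contribution is just the product of the two published weights:

```python
category_weight = 0.22   # Agentic category weight in the overall score
benchmark_weight = 0.10  # TAU-bench's share of the Agentic category

effective_weight = category_weight * benchmark_weight  # 0.022

# Example: Claude Mythos Preview's tracked 89.2% would contribute
# 89.2 * 0.022 ≈ 1.96 points to its overall score, if it were counted.
print(f"{effective_weight:.1%}")  # 2.2%
```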

About TAU-bench

Year: 2024
Tasks: 680

BenchLM freshness & provenance

Version: TAU-bench 2024
Refresh cadence: Annual
Staleness state: Refreshing
Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
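
As an illustration only, a gate like that could reduce to a few lines; the state names and rules below are assumptions, not BenchLM's actual policy:

```python
# Hypothetical freshness gate; states and rules are illustrative
# assumptions, not BenchLM's documented methodology.

def benchmark_role(staleness: str, exact_source_attached: bool) -> str:
    if not exact_source_attached:
        return "display-only reference"      # e.g. TAU-bench today
    if staleness in ("fresh", "refreshing"):
        return "strong differentiator"
    return "benchmark to watch"

print(benchmark_role("refreshing", exact_source_attached=False))
```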

Tracked score table (38 models)

#1 · Claude Mythos Preview · claude-mythos-preview · 89.2%
#2 · Claude Sonnet 4.6 · claude-sonnet-4-6 · 87.5%
#3 · Claude Sonnet 4.5 · claude-sonnet-4-5 · 86.2%
#4 · Claude Opus 4.6 · claude-opus-4-6 · 84.8%
#5 · GLM-5 (Reasoning) · glm-5-reasoning · 83.4%
#6 · Claude 4.1 Opus · claude-4-1-opus · 82.4%
#7 · GLM-5 · glm-5 · 82.1%
#8 · Grok 4.20 Multi-agent · grok-4-20-multi-agent-beta · 80.5%
#9 · GPT-5.4 Pro · gpt-5-4-pro · 80.1%
#10 · GLM-4.5 · glm-4-5 · 79.7%
#11 · Grok 4.20 · grok-4-20-beta · 78.9%
#12 · GPT-5.4 · gpt-5-4 · 78.3%
#13 · Qwen3.5 397B (Reasoning) · qwen3-5-397b-reasoning · 78.2%
#14 · GPT-5.3 Codex · gpt-5-3-codex · 77.8%
#15 · Qwen3.5 397B · qwen3-5-397b · 77.5%
#16 · Qwen3.6 Plus · qwen3-6-plus · 76.8%
#17 · Gemini 3.1 Pro · gemini-3-1-pro · 76.5%
#18 · Step 3.5 Flash · step-3-5-flash · 76.2%
#19 · Gemini 3 Pro · gemini-3-pro · 75.3%
#20 · GPT-5.2 · gpt-5-2 · 75.1%
#21 · Llama 4 Behemoth · llama-4-behemoth · 74.8%
#22 · Grok 4.1 · grok-4-1 · 74.6%
#23 · GPT-5.1 · gpt-5-1 · 74.2%
#24 · Kimi K2.5 · kimi-k2-5 · 74.2%
#25 · DeepSeek V3.2 (Thinking) · deepseek-v3-2-thinking · 73.5%
#26 · GPT-5.3 Instant · gpt-5-3-instant · 73.4%
#27 · Grok 4.1 Fast · grok-4-1-fast · 72.8%
#28 · MiniMax M2.7 · minimax-m2-7 · 72.8%
#29 · Nemotron 3 Ultra 500B · nemotron-3-ultra-500b · 72.1%
#30 · o4-mini (high) · o4-mini-high · 71.8%
#31 · Gemini 3 Flash · gemini-3-flash · 71.5%
#32 · DeepSeek V3.2 · deepseek-v3-2 · 71.3%
#33 · Mistral Large 3 · mistral-large-3 · 70.2%
#34 · MiMo-V2-Pro · mimo-v2-pro · 69.2%
#35 · Llama 4 Maverick · llama-4-maverick · 68.5%
#36 · Qwen3.5 Flash · qwen3-5-flash · 66.4%
#37 · Mistral Small 4 · mistral-small-4 · 65.8%
#38 · Llama 4 Scout · llama-4-scout · 62.3%

FAQ

What does TAU-bench measure?

TAU-bench evaluates AI agents in realistic enterprise scenarios requiring multi-turn tool use, database interactions, and policy adherence. It tests across retail and airline domains, measuring an agent's ability to reliably complete customer service tasks while following complex business rules.

Which model leads the published TAU-bench snapshot?

Claude Mythos Preview currently leads the published TAU-bench snapshot with a tracked score of 89.2%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on TAU-bench?

38 AI models are included in BenchLM's mirrored TAU-bench snapshot, based on the public leaderboard captured on April 7, 2026.

Last updated: April 7, 2026 · mirrored from the public benchmark leaderboard
