Claw-Eval

An end-to-end real-world agent benchmark for OpenClaw-style workflows spanning tool use, planning, execution, and recovery across practical tasks.

Top Models on Claw-Eval — March 2026

As of March 2026, Claude Opus 4.6 tops the Claw-Eval leaderboard at 66.3%, tied with GPT-5.4 (66.3%) and Claude Sonnet 4.6 (66.3%).

21 models · Agentic · Current · Display only · Updated April 2, 2026

According to BenchLM.ai, Claude Opus 4.6 holds the top score on Claw-Eval at 66.3%, with GPT-5.4 and Claude Sonnet 4.6 tied at the same mark. A three-way tie at the top means the benchmark currently offers no separation among frontier models, even though absolute scores remain well below 100%.

21 models have been evaluated on Claw-Eval. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system. Claw-Eval itself, however, is display-only: it is shown for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
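To make the display-only rule concrete, here is a minimal sketch of how a category-weighted overall score could exclude such benchmarks. Everything below except the 22% Agentic weight (the other categories, field names, and per-category averaging) is an assumption for illustration, not BenchLM.ai's published formula.

```python
# Hypothetical sketch of category-weighted overall scoring.
# Only the 22% Agentic weight comes from this page; all other
# details are assumptions for illustration.

CATEGORY_WEIGHTS = {"Agentic": 0.22}  # other categories would cover the remaining 0.78

def overall_score(results):
    """results: iterable of dicts like
    {"benchmark": "Claw-Eval", "category": "Agentic",
     "score": 66.3, "display_only": True}."""
    totals, counts = {}, {}
    for r in results:
        if r["display_only"]:
            continue  # e.g. Claw-Eval: shown for reference, excluded from scoring
        cat = r["category"]
        totals[cat] = totals.get(cat, 0.0) + r["score"]
        counts[cat] = counts.get(cat, 0) + 1
    # average within each category, then apply that category's weight
    return sum(
        CATEGORY_WEIGHTS.get(cat, 0.0) * totals[cat] / counts[cat]
        for cat in totals
    )
```

Fed only the display-only Claw-Eval entry from the docstring, overall_score returns 0.0, matching the page's note that the benchmark does not directly affect overall rankings.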

About Claw-Eval

Year: 2026
Tasks: Real-world agent workflows
Format: End-to-end agent evaluation
Difficulty: Broad real-world agentic execution

Claw-Eval is designed to test whether models can actually complete broad agent workflows instead of only local tool calls. It is useful for comparing agent reliability on realistic multi-step tasks with branching execution paths.
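As a concrete illustration of that distinction, the sketch below grades only the final state of a workflow, so locally correct tool calls on an abandoned task still score zero. The env/agent interface and grading rule are assumptions for illustration; this page does not publish Claw-Eval's harness.

```python
# Illustrative only: a hypothetical end-to-end grader, not Claw-Eval's harness.

def grade_episode(env, agent, task, max_steps=30):
    """Run a multi-step workflow and score the end state, not the
    individual tool calls made along the way."""
    state = env.reset(task)
    for _ in range(max_steps):
        action = agent.act(state)  # agents may branch: retry, replan, recover
        state = env.step(action)
        if state.done:
            break
    return 1.0 if env.check_goal(state) else 0.0  # end-to-end pass/fail
```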

BenchLM freshness & provenance

Version: Claw-Eval 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
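One way such a policy could be encoded is sketched below; the states and thresholds are invented for illustration, and the authoritative rules live on the BenchLM methodology page.

```python
# Assumed policy sketch; real states and thresholds are defined in the
# BenchLM methodology, not on this page.

def benchmark_treatment(staleness_state: str, top_spread_points: float) -> str:
    """staleness_state: freshness label such as "Current" or "Stale".
    top_spread_points: score gap among the leading models, in points."""
    if staleness_state != "Current":
        return "display-only reference"  # too stale to rank with
    if top_spread_points < 0.5:
        return "display-only reference"  # no frontier signal (Claw-Eval's tied top three)
    if top_spread_points < 2.0:
        return "benchmark to watch"      # differentiation is fading
    return "strong differentiator"
```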

Leaderboard (21 models)

#1 Claude Opus 4.6 · 66.3%
#2 GPT-5.4 · 66.3%
#3 Claude Sonnet 4.6 · 66.3%
#4 MiMo-V2-Pro · 61.5%
#5 Claude Opus 4.5 · 59.6%
#6 Qwen3.6 Plus · 58.7%
#7 GLM-5 · 57.7%
#8 MiMo-V2-Omni · 56.7%
#9 Grok 4.1 Fast · 53.8%
#10 GLM-5-Turbo · 53.8%
#11 Kimi K2.5 · 52.9%
#12 MiniMax M2.7 · 51.9%
#13 DeepSeek V3.2 · 51.0%
#14 Gemini 3.1 Pro · 50.0%
#15 Qwen3.5 397B · 48.1%
#16 MiMo-V2-Flash · 48.1%
#17 Qwen3.5-122B-A10B · 47.1%
#18 Gemini 3 Flash · 47.1%
#19 GLM-4.5-Air · 42.3%
#20 Gemini 2.5 Flash · 27.9%
#21 Qwen3.5-27B · 20.2%

FAQ

What does Claw-Eval measure?

Claw-Eval is an end-to-end, real-world agent benchmark for OpenClaw-style workflows. It measures tool use, planning, execution, and recovery across practical multi-step tasks.

Which model scores highest on Claw-Eval?

Claude Opus 4.6 by Anthropic currently holds the #1 position at 66.3%, tied with GPT-5.4 and Claude Sonnet 4.6.

How many models are evaluated on Claw-Eval?

21 AI models have been evaluated on Claw-Eval on BenchLM.

Last updated: April 2, 2026 · BenchLM version Claw-Eval 2026
