Claw-Eval

An end-to-end real-world agent benchmark for OpenClaw-style workflows spanning tool use, planning, execution, and recovery across practical tasks.

Benchmark score on Claw-Eval — April 10, 2026

BenchLM mirrors the published score view for Claw-Eval. Claude Opus 4.5 leads the public snapshot at 59.6%, followed by Qwen3.6 Plus (58.7%) and GLM-5 (57.7%). BenchLM does not use these results to rank models overall.

5 models · Agentic · Current · Display only · Updated April 10, 2026

The published Claw-Eval snapshot is tightly clustered at the top: Claude Opus 4.5 sits at 59.6%, and the third-place model trails by only 1.9 points. The spread across all five evaluated models is 11.5 points, so the benchmark still separates strong models even when the leaders cluster.
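
As a quick check on those figures, here is a minimal sketch that recomputes the gap and spread directly from the published snapshot scores; nothing beyond the listed scores is assumed:

# Published Claw-Eval snapshot scores (April 10, 2026), best to worst.
scores = [59.6, 58.7, 57.7, 52.9, 48.1]

leader_to_third = scores[0] - scores[2]   # 59.6 - 57.7 = 1.9 points
full_spread = scores[0] - scores[-1]      # 59.6 - 48.1 = 11.5 points

print(f"Leader to third place: {leader_to_third:.1f} points")
print(f"Spread across all five models: {full_spread:.1f} points")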

Five models have been evaluated on Claw-Eval. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system. Claw-Eval is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.

About Claw-Eval

Year

2026

Tasks

Real-world agent workflows

Format

End-to-end agent evaluation

Difficulty

Broad real-world agentic execution

Claw-Eval is designed to test whether models can actually complete broad agent workflows rather than just isolated tool calls. It is useful for comparing agent reliability on realistic multi-step tasks with branching execution paths.

BenchLM freshness & provenance

Version

Claw-Eval 2026

Refresh cadence

Quarterly

Staleness state

Current

Question availability

Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
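
The full formula lives on the methodology page; purely as an illustration of how a display-only flag can interact with a category weight, here is a hypothetical sketch. The record fields, the simple averaging rule, and the function names are assumptions for this example, not BenchLM's published method:

# Hypothetical sketch: category-weighted scoring that skips display-only
# benchmarks. Field names and the averaging rule are illustrative only.

CATEGORY_WEIGHTS = {"Agentic": 0.22}  # other categories omitted for brevity

benchmarks = [
    {"name": "Claw-Eval", "category": "Agentic", "score": 59.6, "display_only": True},
    # ... other Agentic benchmarks that do count toward scoring would go here
]

def category_score(benchmarks, category):
    # Average only the benchmarks that are actually scored (not display-only).
    counted = [b["score"] for b in benchmarks
               if b["category"] == category and not b["display_only"]]
    return sum(counted) / len(counted) if counted else None

def weighted_contribution(benchmarks, category):
    score = category_score(benchmarks, category)
    # A display-only benchmark like Claw-Eval contributes nothing here.
    return CATEGORY_WEIGHTS[category] * score if score is not None else 0.0

print(weighted_contribution(benchmarks, "Agentic"))  # 0.0: Claw-Eval is excluded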

Benchmark score table (5 models)

1. Claude Opus 4.5: 59.6%
2. Qwen3.6 Plus: 58.7%
3. GLM-5: 57.7%
4. 52.9%
5. 48.1%

FAQ

What does Claw-Eval measure?

Claw-Eval measures end-to-end performance on real-world, OpenClaw-style agent workflows, covering tool use, planning, execution, and recovery across practical tasks.

Which model scores highest on Claw-Eval?

Claude Opus 4.5 by Anthropic currently leads with a score of 59.6% on Claw-Eval.

How many models are evaluated on Claw-Eval?

Five AI models have been evaluated on Claw-Eval on BenchLM.

Last updated: April 10, 2026 · BenchLM version Claw-Eval 2026
