
PinchBench

An OpenClaw agent benchmark from Kilo that measures successful task completion across standardized real-world agent workflows.

How BenchLM shows PinchBench

BenchLM mirrors the public PinchBench average-success-rate view using the official March 31, 2026 snapshot: 50 models, 625 runs, and 23 OpenClaw tasks. PinchBench grades runs with automated checks plus an LLM judge.
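The average-success-rate view amounts to a per-model mean over official runs. A minimal sketch, assuming each run is recorded as a pass/fail flag (the run records and field names here are illustrative, not PinchBench's actual schema):

```python
from collections import defaultdict

# Illustrative run records; field names are assumptions, not the real schema.
runs = [
    {"model": "arcee-ai/trinity-large-thinking", "passed": True},
    {"model": "arcee-ai/trinity-large-thinking", "passed": False},
    {"model": "qwen/qwen3.6-plus-preview", "passed": True},
]

passes, counts = defaultdict(int), defaultdict(int)
for run in runs:
    counts[run["model"]] += 1
    passes[run["model"]] += run["passed"]  # True counts as 1

# Average success rate per model, as a percentage.
avg = {m: 100.0 * passes[m] / counts[m] for m in counts}
print(avg["arcee-ai/trinity-large-thinking"])  # 50.0
```

In the published view this mean is taken over all of a model's official runs, rather than its single best run.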

This benchmark is display-only on BenchLM: it is excluded from BenchLM's overall rankings, category rankings, and weighted scoring. The table below uses average scores only, matching the public PinchBench average view rather than the best-run view.

50 models · 625 runs · 23 tasks · Average scores only · Official runs

Average success rate on PinchBench — March 31, 2026, 2:05 PM

BenchLM mirrors the published average-success-rate view for PinchBench. Trinity-Large-Thinking leads the public snapshot at 91.9%, followed by Qwen3.6 Plus (84.0%) and MiniMax M2.7 (83.2%). BenchLM does not use these results to rank models overall.

50 models · Agentic · Current · Display only · Updated March 31, 2026, 2:05 PM

The published PinchBench snapshot is tightly clustered at the top: Trinity-Large-Thinking sits at 91.9%, while the third row is only 8.7 points behind. The broader top-10 spread is 11.1 points, so the benchmark still separates strong models even when the leaders cluster.
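Both gaps follow directly from the published scores in the table on this page:

```python
# Top-10 average success rates from the published PinchBench snapshot.
top10 = [91.9, 84.0, 83.2, 83.1, 81.9, 81.7, 81.6, 81.1, 81.0, 80.8]

gap_to_third = round(top10[0] - top10[2], 1)   # 91.9 - 83.2
top10_spread = round(top10[0] - top10[-1], 1)  # 91.9 - 80.8

print(gap_to_third)  # 8.7
print(top10_spread)  # 11.1
```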

50 models have been evaluated on PinchBench. The benchmark falls in the Agentic category. This category carries a 22% weight in BenchLM.ai's overall scoring system. PinchBench is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
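The interaction between category weights and the display-only flag can be sketched as follows. This is a hypothetical illustration of category-weighted scoring that skips display-only benchmarks; the data structures and formula are assumptions, not BenchLM's actual scoring system:

```python
# Hypothetical sketch: category-weighted overall score that excludes
# display-only benchmarks. Structure and weights are illustrative
# assumptions, not BenchLM's real formula.
def overall_score(benchmarks, weights):
    totals, n = {}, {}
    for b in benchmarks:
        if b["display_only"]:  # e.g. PinchBench: shown, never scored
            continue
        cat = b["category"]
        totals[cat] = totals.get(cat, 0.0) + b["score"]
        n[cat] = n.get(cat, 0) + 1
    # Average within each category, then combine by category weight.
    return sum(weights[cat] * totals[cat] / n[cat] for cat in totals)

benches = [
    {"category": "Agentic", "score": 80.0, "display_only": False},
    {"category": "Agentic", "score": 91.9, "display_only": True},  # PinchBench-style
]
# Only the scored Agentic benchmark contributes, at the 22% category weight.
print(round(overall_score(benches, {"Agentic": 0.22}), 2))  # 17.6
```

The key point mirrored from the text: a display-only entry like PinchBench is skipped entirely, so its 91.9% never enters the weighted sum.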

About PinchBench

Year: 2026

Tasks: 23 OpenClaw agent tasks

Format: Average success rate from official runs

Difficulty: Long-horizon agent workflows

PinchBench publishes official OpenClaw runs across 23 tasks and grades results with automated checks plus an LLM judge. BenchLM mirrors the public average-score view as a display-only benchmark.

BenchLM freshness & provenance

Version: PinchBench 2026

Refresh cadence: Quarterly

Staleness state: Current

Question availability: Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
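That three-way decision can be pictured as a small rule table. A hypothetical sketch only, using the three treatments named above; the actual policy lives on the BenchLM methodology page:

```python
# Hypothetical sketch of mapping freshness metadata to a benchmark's
# treatment. The rules are illustrative assumptions, not BenchLM's policy.
def treatment(staleness: str, display_only: bool) -> str:
    if display_only:  # e.g. PinchBench
        return "display-only reference"
    if staleness == "current":
        return "strong differentiator"
    return "benchmark to watch"

print(treatment("current", display_only=True))  # display-only reference
```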

Average success rate table (50 models)

1. Trinity-Large-Thinking (arcee-ai/trinity-large-thinking): 91.9%
2. Qwen3.6 Plus (qwen/qwen3.6-plus-preview): 84.0%
3. MiniMax M2.7 (minimax/minimax-m2.7): 83.2%
4. Claude Opus 4.6 (anthropic/claude-opus-4.6): 83.1%
5. MiMo-V2-Omni (xiaomi/mimo-v2-omni): 81.9%
6. GPT-5.4 (openai/gpt-5.4): 81.7%
7. GLM-5-Turbo (z-ai/glm-5-turbo): 81.6%
8. Claude Sonnet 4.6 (anthropic/claude-sonnet-4.6): 81.1%
9. Claude Sonnet 4.5 (anthropic/claude-sonnet-4.5): 81.0%
10. GLM-5 (z-ai/glm-5): 80.8%
11. Qwen3.5-122B-A10B (qwen/qwen3.5-122b-a10b): 80.8%
12. MiMo-V2-Pro (xiaomi/mimo-v2-pro): 80.7%
13. Claude Sonnet 4 (anthropic/claude-sonnet-4): 80.5%
14. Qwen3.5 397B (qwen/qwen3.5-397b-a17b): 80.4%
15. MiniMax M2.1 (minimax/minimax-m2.1): 79.7%
16. Claude Opus 4.5 (anthropic/claude-opus-4.5): 79.5%
17. MiniMax M2.5 (minimax/minimax-m2.5): 79.4%
18. Qwen3.5 Plus (qwen/qwen3.5-plus-02-15): 79.1%
19. Kimi K2.5 (moonshotai/kimi-k2.5): 79.1%
20. Qwen3 Coder Next (qwen/qwen3-coder-next): 79.1%
21. Qwen3.5-27B (qwen/qwen3.5-27b): 78.5%
22. Claude Haiku 4.5 (anthropic/claude-haiku-4.5): 78.1%
23. GLM-4.5-Air (z-ai/glm-4.5-air): 77.7%
24. Healer Alpha (openrouter/healer-alpha): 77.3%
25. Hunter Alpha (openrouter/hunter-alpha): 77.3%
26. Gemini 3.1 Pro (google/gemini-3.1-pro-preview): 77.0%
27. Step 3.5 Flash (stepfun/step-3.5-flash): 76.9%
28. Nemotron 3 Super 120B A12B (nvidia/nemotron-3-super-120b-a12b): 75.5%
29. Devstral 2512 (mistralai/devstral-2512): 75.2%
30. Gemini 3 Flash (google/gemini-3-flash-preview): 74.6%
31. Nova 2 Lite v1 (amazon/nova-2-lite-v1): 72.0%
32. Grok 4.1 Fast (x-ai/grok-4.1-fast): 71.8%
33. Qwen3 Max Thinking (qwen/qwen3-max-thinking): 71.8%
34. Qwen3.5-35B-A3B (qwen/qwen3.5-35b-a3b): 71.7%
35. MiMo-V2-Flash (xiaomi/mimo-v2-flash): 70.2%
36. Mercury 2 (inception/mercury-2): 70.0%
37. GPT-5 mini (openai/gpt-5-mini): 69.7%
38. Nemotron 3 Super 120B A12B: Free (nvidia/nemotron-3-super-120b-a12b:free): 69.6%
39. Trinity-Large-Preview (arcee-ai/trinity-large-preview): 69.4%
40. DeepSeek V3.2 (deepseek/deepseek-v3.2): 68.6%
41. Gemini 3 Pro (google/gemini-3-pro-preview): 67.7%
42. Mistral Large 2512 (mistralai/mistral-large-2512): 66.0%
43. Gemini 2.5 Pro (google/gemini-2.5-pro): 65.3%
44. Trinity-Large-Preview: Free (arcee-ai/trinity-large-preview:free): 65.1%
45. DeepSeek Chat (deepseek/deepseek-chat): 63.9%
46. GPT-4o mini (openai/gpt-4o-mini): 63.5%
47. Gemini 2.5 Flash (google/gemini-2.5-flash): 58.0%
48. GPT-5 nano (openai/gpt-5-nano): 57.9%
49. GPT-4o (openai/gpt-4o): 55.7%
50. GPT-OSS 120B (openai/gpt-oss-120b): 50.2%

FAQ

What does PinchBench measure?

An OpenClaw agent benchmark from Kilo that measures successful task completion across standardized real-world agent workflows.

Which model leads the published PinchBench snapshot?

Trinity-Large-Thinking currently leads the published PinchBench snapshot with an average success rate of 91.9%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on PinchBench?

50 AI models are included in BenchLM's mirrored PinchBench snapshot, based on the public leaderboard captured on March 31, 2026, 2:05 PM.

Last updated: March 31, 2026, 2:05 PM · mirrored from the public benchmark leaderboard
