An OpenClaw agent benchmark from Kilo that measures successful task completion across standardized real-world agent workflows.
BenchLM mirrors the public PinchBench average-success-rate view using the official March 31, 2026 snapshot: 50 models, 625 runs, and 23 OpenClaw tasks. PinchBench grades runs with automated checks plus an LLM judge.
This benchmark is display-only on BenchLM. It is excluded from BenchLM's overall rankings, category rankings, and weighted scoring. The table below uses average scores only, matching the public PinchBench average view rather than the best-run view.
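To make the average-view versus best-run-view distinction concrete, here is a minimal Python sketch of the two aggregations. The run records, field layout, and `success` flag are hypothetical, not PinchBench's actual schema; the point is only that one view averages every run while the other counts a task as solved if any run on it succeeded.

```python
from collections import defaultdict

# Hypothetical graded runs: (model, task, success). In PinchBench, success
# is decided by automated checks plus an LLM judge; these rows are invented.
runs = [
    ("trinity-large-thinking", "task-01", True),
    ("trinity-large-thinking", "task-01", False),
    ("trinity-large-thinking", "task-02", True),
    ("qwen3.6-plus", "task-01", True),
    ("qwen3.6-plus", "task-02", False),
]

def average_view(runs):
    """Average-success-rate view: mean over all of a model's runs."""
    totals = defaultdict(lambda: [0, 0])  # model -> [successes, run count]
    for model, _task, success in runs:
        totals[model][0] += int(success)
        totals[model][1] += 1
    return {model: s / n for model, (s, n) in totals.items()}

def best_run_view(runs):
    """Best-run view: a task counts as solved if any run on it succeeded."""
    solved = defaultdict(dict)  # model -> {task: solved?}
    for model, task, success in runs:
        solved[model][task] = solved[model].get(task, False) or success
    return {m: sum(t.values()) / len(t) for m, t in solved.items()}

print(average_view(runs))   # trinity-large-thinking: 2 of 3 runs, ~0.667
print(best_run_view(runs))  # trinity-large-thinking: 2 of 2 tasks, 1.0
```

Averaging penalizes flaky runs, which is why an average view generally reads lower than a best-run view for the same model.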
BenchLM mirrors the published average-success-rate view for PinchBench. Trinity-Large-Thinking leads the public snapshot at 91.9%, followed by Qwen3.6 Plus (84.0%) and MiniMax M2.7 (83.2%). BenchLM does not use these results to rank models overall.
Rank  Model                   Provider  Model ID                         Avg. success rate
1     Trinity-Large-Thinking  Arcee AI  arcee-ai/trinity-large-thinking  91.9%
2     Qwen3.6 Plus            Alibaba   qwen/qwen3.6-plus-preview        84.0%
3     MiniMax M2.7            MiniMax   minimax/minimax-m2.7             83.2%
The published PinchBench snapshot is tightly clustered at the top: Trinity-Large-Thinking sits at 91.9%, and the third-place model is only 8.7 points behind at 83.2%. The broader top-10 spread is 11.1 points, so the benchmark still separates strong models even when the leaders cluster.
50 models have been evaluated on PinchBench. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system. PinchBench itself, however, is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
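As a rough sketch of how a category-weighted overall score can skip display-only benchmarks, the Python below uses invented category weights and benchmark records; it is illustrative only, not BenchLM's actual data or formula.

```python
# Illustrative only: the weights and benchmark records are invented for
# this sketch, not BenchLM's actual scoring inputs.
CATEGORY_WEIGHTS = {"Agentic": 0.22, "Coding": 0.30, "Reasoning": 0.48}

benchmarks = [
    {"name": "PinchBench",     "category": "Agentic", "score": 91.9, "display_only": True},
    {"name": "SomeAgentBench", "category": "Agentic", "score": 77.0, "display_only": False},
    {"name": "SomeCodeBench",  "category": "Coding",  "score": 68.5, "display_only": False},
]

def overall_score(benchmarks):
    """Weighted average of category means, skipping display-only benchmarks."""
    by_cat = {}
    for b in benchmarks:
        if b["display_only"]:
            continue  # display-only benchmarks never enter the formula
        by_cat.setdefault(b["category"], []).append(b["score"])
    total = weight = 0.0
    for cat, scores in by_cat.items():
        w = CATEGORY_WEIGHTS.get(cat, 0.0)
        total += w * (sum(scores) / len(scores))
        weight += w
    return total / weight if weight else 0.0

print(round(overall_score(benchmarks), 1))  # PinchBench's 91.9 has no effect
```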
Year: 2026
Tasks: 23 OpenClaw agent tasks
Format: Average success rate from official runs
Difficulty: Long-horizon agent workflows
PinchBench publishes official OpenClaw runs across 23 tasks and grades results with automated checks plus an LLM judge. BenchLM mirrors the public average-success-rate view as a display-only benchmark.
Version: PinchBench 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
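A hedged sketch of such a freshness check is below; the tier thresholds, field names, and function are assumptions for illustration, not the published methodology.

```python
from datetime import date

# Assumed inputs mirroring the metadata shown above; the thresholds are
# invented for illustration, not BenchLM's actual policy.
def freshness_tier(last_refresh: date, cadence_days: int, today: date) -> str:
    age = (today - last_refresh).days
    if age <= cadence_days:
        return "strong differentiator"  # refreshed within its cadence
    if age <= 2 * cadence_days:
        return "benchmark to watch"     # one missed refresh window
    return "display-only reference"     # stale relative to its cadence

# Quarterly cadence (~90 days), snapshot captured March 31, 2026:
print(freshness_tier(date(2026, 3, 31), 90, date(2026, 6, 1)))
```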
Trinity-Large-Thinking currently leads the published PinchBench snapshot with an average success rate of 91.9%. BenchLM shows this benchmark for display only and does not use it in overall rankings.
50 AI models are included in BenchLM's mirrored PinchBench snapshot, based on the public leaderboard captured on March 31, 2026, at 2:05 PM.