An agent benchmark from Kilo focused on tasks relevant to OpenClaw-style workflows.
As of March 2026, Claude Opus 4.6 leads the PinchBench leaderboard with 93.3%, followed by Trinity-Large-Thinking (91.9%) and MiniMax M2.7 (89.8%).
1. Claude Opus 4.6 (Anthropic): 93.3%
2. Trinity-Large-Thinking (Arcee AI): 91.9%
3. MiniMax M2.7 (MiniMax): 89.8%
According to BenchLM.ai, Claude Opus 4.6 leads the PinchBench benchmark with a score of 93.3%, followed by Trinity-Large-Thinking (91.9%) and MiniMax M2.7 (89.8%). The spread is moderate: 3.5 percentage points separate first from third place, a meaningful gap between the top tier and mid-tier models.
Five models have been evaluated on PinchBench. The benchmark falls in BenchLM.ai's Agentic category, which carries a 22% weight in the overall scoring system. PinchBench itself is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
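The exclusion rule above is easy to express in code. Below is a minimal sketch assuming a simple weighted mean; only the 22% Agentic category weight comes from this page, and everything else (the other categories and their weights, the field names, the normalization step) is an illustrative assumption, not BenchLM's published formula.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    category: str        # e.g. "Agentic"
    score: float         # fraction in [0.0, 1.0]
    display_only: bool   # display-only benchmarks are excluded from scoring

# Illustrative category weights; only "Agentic" = 0.22 is from the page.
CATEGORY_WEIGHTS = {"Agentic": 0.22, "Coding": 0.30, "Reasoning": 0.48}

def overall_score(results: list[BenchmarkResult]) -> float:
    """Weighted mean over scored benchmarks, skipping display-only ones."""
    scored = [r for r in results if not r.display_only]
    total_weight = sum(CATEGORY_WEIGHTS[r.category] for r in scored)
    if total_weight == 0:
        return 0.0
    weighted = sum(CATEGORY_WEIGHTS[r.category] * r.score for r in scored)
    return weighted / total_weight  # renormalize over scored categories

# PinchBench is display-only, so it contributes nothing here:
results = [
    BenchmarkResult("PinchBench", "Agentic", 0.933, display_only=True),
    BenchmarkResult("SomeCodingBench", "Coding", 0.80, display_only=False),
]
print(overall_score(results))  # ~0.80: only the scored benchmark counts
```

Note how the renormalization step matters: dropping a display-only benchmark removes its weight from the denominator rather than penalizing the model with a zero.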
Year: 2026
Tasks: OpenClaw-style agent tasks
Format: Agent capability benchmark
Difficulty: Long-horizon agent workflows
BenchLM tracks benchmarks such as PinchBench as display-only references when their scores come from exact first-party comparison values published by the providers.
Trinity-Large-Thinking: Scaling an Open Source Frontier Agent

Version: PinchBench 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
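A hedged sketch of how such a triage rule might look, using the three treatments named above and the metadata fields on this page. The thresholds, the "Aging" state, and the function name are illustrative assumptions; BenchLM's actual policy is defined on its methodology page.

```python
def benchmark_treatment(staleness_state: str, first_party_only: bool) -> str:
    """Map freshness metadata to one of the three treatments named above."""
    if first_party_only:
        # Provider-published first-party comparison values are never scored.
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    if staleness_state == "Aging":  # assumed intermediate state
        return "benchmark to watch"
    return "display-only reference"

# PinchBench: refreshed quarterly and currently "Current", but tracked
# from first-party values, so it remains a display-only reference.
print(benchmark_treatment("Current", first_party_only=True))
```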