A display-only SWE-bench Verified reference from Arcee AI's Trinity-Large-Thinking comparison chart.
As of March 2026, Claude Opus 4.6 leads the SWE-bench Verified* leaderboard with 75.6%, followed by MiniMax M2.7 (75.4%) and GLM-5 (72.8%).
Model             Organization   Score
Claude Opus 4.6   Anthropic      75.6%
MiniMax M2.7      MiniMax        75.4%
GLM-5             Zhipu AI       72.8%
According to BenchLM.ai, Claude Opus 4.6 leads the SWE-bench Verified* benchmark with a score of 75.6%, followed by MiniMax M2.7 (75.4%) and GLM-5 (72.8%). The top models are clustered within 2.8 points, suggesting this benchmark is nearing saturation for frontier models.
Five models have been evaluated on SWE-bench Verified*. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. SWE-bench Verified* itself is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
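Concretely, that exclusion behaves like a filter inside the weighted category average. The sketch below illustrates one way it could work; only the 20% Coding weight and the display-only exclusion come from this page, while the `BenchmarkResult` record, the second benchmark, and its score are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    category: str       # e.g. "Coding"
    score: float        # 0-100
    display_only: bool  # skipped by the scoring formula if True

# Hypothetical weight table; only Coding's 20% is stated on this page.
CATEGORY_WEIGHTS = {"Coding": 0.20}

def overall_score(results: list[BenchmarkResult]) -> float:
    """Weight each category's average score, skipping display-only rows."""
    total = 0.0
    for category, weight in CATEGORY_WEIGHTS.items():
        scored = [r.score for r in results
                  if r.category == category and not r.display_only]
        if scored:
            total += weight * (sum(scored) / len(scored))
    return total

results = [
    BenchmarkResult("SWE-bench Verified*", "Coding", 75.6, display_only=True),
    # Placeholder benchmark and score, purely for illustration:
    BenchmarkResult("SomeOtherCodingBench", "Coding", 70.0, display_only=False),
]
# The display-only row is filtered out before averaging, so its 75.6%
# never reaches the overall score:
print(overall_score(results))  # 0.20 * 70.0 = 14.0
```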
Year: 2026
Tasks: Repository task completion
Format: Agent scaffold benchmark
Difficulty: Professional software engineering
BenchLM stores this chart-specific SWE-bench Verified* row separately because Arcee notes that all models were evaluated with the mini-swe-agent-v2 scaffold.
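A minimal sketch of what storing that row separately could look like; keying on a (benchmark, scaffold) pair is an assumption, since the page only says the row is kept apart because every model ran in mini-swe-agent-v2. The scores are the ones shown above.

```python
# Hedged sketch: chart-specific scores keyed by (benchmark, scaffold),
# kept apart from any canonical, scaffold-agnostic row. The key scheme
# is an assumption; the scores are the ones reported in the chart above.
ChartRow = dict[str, float]  # model name -> score

chart_rows: dict[tuple[str, str], ChartRow] = {
    ("SWE-bench Verified*", "mini-swe-agent-v2"): {
        "Claude Opus 4.6": 75.6,
        "MiniMax M2.7": 75.4,
        "GLM-5": 72.8,
    },
}

# Look up this chart's numbers without touching the canonical row:
print(chart_rows[("SWE-bench Verified*", "mini-swe-agent-v2")]["GLM-5"])  # 72.8
```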
Source: Trinity-Large-Thinking: Scaling an Open Source Frontier Agent
Version: SWE-bench Verified* 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
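As one concrete reading of that policy, here is a hedged sketch: the three treatment tiers and this row's Quarterly/Current/display-only metadata come from the page, but the 91-day window, the mapping rules, and the example refresh date are assumptions, not BenchLM's published formula.

```python
from datetime import date

def staleness_state(last_refresh: date, today: date,
                    cadence_days: int = 91) -> str:
    """'Current' if refreshed within one assumed quarterly cycle, else 'Stale'."""
    return "Current" if (today - last_refresh).days <= cadence_days else "Stale"

def treatment(state: str, display_only: bool) -> str:
    # Tier names come from the page; the mapping itself is an assumption.
    if display_only:
        return "display-only reference"
    return "strong differentiator" if state == "Current" else "benchmark to watch"

# Hypothetical refresh date for this SWE-bench Verified* row:
state = staleness_state(date(2026, 1, 15), date(2026, 3, 1))
print(treatment(state, display_only=True))  # -> display-only reference
```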