A display-only SWE-bench Verified reference from Arcee AI's Trinity-Large-Thinking comparison chart.
BenchLM mirrors the published score view for SWE-bench Verified*. Claude Opus 4.6 leads the public snapshot at 75.6%, followed by MiniMax M2.7 (75.4%) and GLM-5 (72.8%). BenchLM does not use these results to rank models overall.
1. Claude Opus 4.6 (Anthropic): 75.6%
2. MiniMax M2.7 (MiniMax): 75.4%
3. GLM-5 (Z.AI): 72.8%
The published SWE-bench Verified* snapshot is tightly clustered at the top: Claude Opus 4.6 sits at 75.6%, and the third row trails by only 2.8 points. The spread across the full displayed list is 12.4 points, so the benchmark still separates strong models even when the leaders cluster.
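To make the clustering arithmetic concrete, here is a minimal Python sketch using only the three published rows above; the spread is just the leader's score minus the lowest listed score.

```python
# Published SWE-bench Verified* scores from the snapshot above.
scores = {
    "Claude Opus 4.6": 75.6,
    "MiniMax M2.7": 75.4,
    "GLM-5": 72.8,
}

# Spread = leader minus lowest listed score: 75.6 - 72.8 = 2.8 points.
spread = max(scores.values()) - min(scores.values())
print(f"Top-3 spread: {spread:.1f} points")  # -> Top-3 spread: 2.8 points
```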
Five models have been evaluated on SWE-bench Verified*. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. SWE-bench Verified* is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
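BenchLM's actual scoring formula is documented on its methodology page, not here. The sketch below only illustrates, under assumed record fields and a hypothetical second Coding benchmark, how a 20% category weight can coexist with a display-only exclusion.

```python
# Hypothetical sketch: a display-only benchmark is shown but skipped when
# computing the weighted Coding contribution. The 20% weight comes from
# the page above; all field names and the second entry are illustrative.
CATEGORY_WEIGHTS = {"Coding": 0.20}

benchmarks = [
    {"name": "SWE-bench Verified*", "category": "Coding",
     "score": 75.6, "display_only": True},   # shown, never scored
    {"name": "SomeOtherCodingBench", "category": "Coding",
     "score": 68.0, "display_only": False},  # hypothetical scored entry
]

def coding_contribution(rows):
    """Average only the scored (non-display-only) Coding rows, then weight."""
    scored = [r["score"] for r in rows
              if r["category"] == "Coding" and not r["display_only"]]
    if not scored:
        return 0.0
    return CATEGORY_WEIGHTS["Coding"] * (sum(scored) / len(scored))

print(f"{coding_contribution(benchmarks):.1f}")  # 0.20 * 68.0 = 13.6
```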
Year: 2026
Tasks: Repository task completion
Format: Agent scaffold benchmark
Difficulty: Professional software engineering
BenchLM stores this chart-specific SWE-bench Verified row separately because Arcee notes that all models were evaluated with the mini-swe-agent-v2 scaffold.
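BenchLM's storage schema is not published. As a purely hypothetical illustration, one way to keep a chart-specific row from colliding with a canonical leaderboard row is to key results by benchmark, scaffold, and source; every field name in this sketch is an assumption.

```python
# Hypothetical sketch: key results by (benchmark, scaffold, source) so a
# chart-specific row stays distinct from any canonical leaderboard row.
# Field names are illustrative, not BenchLM's actual schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class ResultKey:
    benchmark: str
    scaffold: str   # the agent harness every model ran in
    source: str     # where the numbers were published

rows = {
    ResultKey("SWE-bench Verified*", "mini-swe-agent-v2",
              "Arcee Trinity-Large-Thinking chart"): 75.6,
}
```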
Version: SWE-bench Verified* 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
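The exact thresholds behind that decision live on the methodology page. The sketch below only shows the shape of the three-way classification named above, with illustrative rules that are not BenchLM's published policy.

```python
# Hypothetical sketch of the three-way freshness classification named
# above. Tier names come from the page; the rules are illustrative.
def freshness_tier(staleness_state: str, display_only: bool) -> str:
    if display_only:
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# SWE-bench Verified* here: staleness "Current" but display-only,
# so it stays a reference row regardless of freshness.
print(freshness_tier("Current", display_only=True))
```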
Claude Opus 4.6 by Anthropic currently leads with a score of 75.6% on SWE-bench Verified*.
Five AI models have been evaluated on SWE-bench Verified* on BenchLM.