SWE-bench Verified* (mini-swe-agent-v2)

A display-only SWE-bench Verified reference from Arcee AI's Trinity-Large-Thinking comparison chart.

Benchmark score on SWE-bench Verified* — April 16, 2026

BenchLM mirrors the published score view for SWE-bench Verified*. Claude Opus 4.6 leads the public snapshot at 75.6%, followed by MiniMax M2.7 (75.4%) and GLM-5 (72.8%). BenchLM does not use these results to rank models overall.

5 models · Coding · Current · Display only · Updated April 16, 2026

The published SWE-bench Verified* snapshot is tightly clustered at the top: Claude Opus 4.6 sits at 75.6%, and the third-place model is only 2.8 points behind. The full spread across all five models is 12.4 points, so the benchmark still separates strong models even when the leaders cluster.
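
These gap figures can be checked directly from the published scores; a minimal Python sketch with the snapshot values hard-coded from the table below:

```python
# SWE-bench Verified* snapshot scores (percent), top to bottom, from this page.
scores = [75.6, 75.4, 72.8, 70.8, 63.2]

leader_to_third = scores[0] - scores[2]  # 75.6 - 72.8 = 2.8 points
full_spread = scores[0] - scores[-1]     # 75.6 - 63.2 = 12.4 points

print(f"Leader-to-third gap: {leader_to_third:.1f} pts")
print(f"Five-model spread:   {full_spread:.1f} pts")
```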

Five models have been evaluated on SWE-bench Verified*. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. SWE-bench Verified* is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
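
BenchLM does not publish its exact formula on this page, so the following is only a rough sketch of how a category-weighted average that skips display-only rows could look; every name and weight except the stated 20% Coding weight is an assumption:

```python
# Hypothetical category weights; only the 20% Coding weight is stated on this page.
CATEGORY_WEIGHTS = {"Coding": 0.20, "Reasoning": 0.40, "Knowledge": 0.40}

# Hypothetical benchmark rows; names and scores besides SWE-bench Verified* are made up.
benchmarks = [
    {"name": "SWE-bench Verified*", "category": "Coding", "score": 75.6, "display_only": True},
    {"name": "OtherCodingBench",    "category": "Coding", "score": 68.0, "display_only": False},
    {"name": "SomeReasoningBench",  "category": "Reasoning", "score": 81.0, "display_only": False},
]

def overall_score(rows):
    """Average scores per category, then weight; display-only rows never count."""
    by_category = {}
    for row in rows:
        if row["display_only"]:
            continue  # excluded from the scoring formula, as described above
        by_category.setdefault(row["category"], []).append(row["score"])
    weighted = sum(CATEGORY_WEIGHTS[c] * sum(s) / len(s) for c, s in by_category.items())
    weight_total = sum(CATEGORY_WEIGHTS[c] for c in by_category)
    return weighted / weight_total  # renormalize over categories with data

print(f"Overall: {overall_score(benchmarks):.1f}")  # SWE-bench Verified* has no effect
```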

About SWE-bench Verified*

Year: 2026
Tasks: Repository task completion
Format: Agent scaffold benchmark
Difficulty: Professional software engineering

BenchLM stores this chart-specific SWE-bench Verified row separately because Arcee notes that all models were evaluated with the mini-swe-agent-v2 scaffold.
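
One plausible way to keep such rows separate is to key each score by both benchmark and scaffold; this is an assumed schema sketch, not BenchLM's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BenchmarkKey:
    benchmark: str
    scaffold: str | None  # None would mean the canonical/default harness

# The chart-specific row keys on the mini-swe-agent-v2 scaffold, so it can
# never collide with a canonical SWE-bench Verified row.
rows = {
    BenchmarkKey("SWE-bench Verified", "mini-swe-agent-v2"): {
        "source": "Arcee AI Trinity-Large-Thinking comparison chart",
        "display_only": True,
    },
}

print(BenchmarkKey("SWE-bench Verified", "mini-swe-agent-v2") in rows)  # True
```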

BenchLM freshness & provenance

Version: SWE-bench Verified* 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set


BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
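
The three treatment tiers named above suggest a simple decision rule; the thresholds and field names in this sketch are assumptions rather than BenchLM's documented policy:

```python
def benchmark_treatment(staleness_state: str, display_only: bool) -> str:
    """Map freshness metadata to one of the three tiers named above."""
    if display_only:
        return "display-only reference"   # e.g. this SWE-bench Verified* row
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"           # stale or aging benchmarks

print(benchmark_treatment("Current", display_only=True))
```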

Benchmark score table (5 models)

1. Claude Opus 4.6: 75.6%
2. MiniMax M2.7: 75.4%
3. GLM-5: 72.8%
4. 70.8%
5. 63.2%

FAQ

What does SWE-bench Verified* measure?

SWE-bench Verified measures whether a model can resolve real GitHub issues drawn from open-source repositories. The row on this page is a display-only reference taken from Arcee AI's Trinity-Large-Thinking comparison chart, with all models run in the mini-swe-agent-v2 scaffold.

Which model scores highest on SWE-bench Verified*?

Claude Opus 4.6 by Anthropic currently leads with a score of 75.6% on SWE-bench Verified*.

How many models are evaluated on SWE-bench Verified*?

5 AI models have been evaluated on SWE-bench Verified* on BenchLM.

Last updated: April 16, 2026 · BenchLM version SWE-bench Verified* 2026
