SWE-bench Pro

A stronger coding-agent benchmark than SWE-bench Verified, intended to differentiate frontier models on realistic software engineering work.

Top models on SWE-bench Pro — April 29, 2026

As of April 29, 2026, Claude Mythos Preview leads the SWE-bench Pro leaderboard with 77.8%, followed by Claude Opus 4.7 (Adaptive) (64.3%) and GPT-5.5 (58.6%).

30 models · Coding · 23% of category score · Current · Updated April 29, 2026

There is significant spread across the leaderboard, from 77.8% at the top to 44.5% at the bottom, which makes SWE-bench Pro effective at differentiating model capabilities.

30 models have been evaluated on SWE-bench Pro. The benchmark falls in the Coding category. This category carries a 20% weight in BenchLM.ai's overall scoring system. Within that category, SWE-bench Pro contributes 23% of the category score, so strong performance here directly affects a model's overall ranking.
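To make the weighting concrete, here is a minimal sketch of how a single benchmark's effective influence on an overall score could be computed, assuming BenchLM's overall score is a weighted average of category scores (the exact formula lives on the BenchLM methodology page; the function name here is illustrative, not BenchLM's API):

```python
# Effective weight of one benchmark on an overall weighted-average score.
CODING_CATEGORY_WEIGHT = 0.20   # Coding category's share of the overall score
BENCH_SHARE_OF_CATEGORY = 0.23  # SWE-bench Pro's share of the Coding category


def effective_weight(category_weight: float, bench_share: float) -> float:
    """Fraction of the overall score driven by a single benchmark."""
    return category_weight * bench_share


print(f"{effective_weight(CODING_CATEGORY_WEIGHT, BENCH_SHARE_OF_CATEGORY):.1%}")
# → 4.6%
```

Under these assumptions, a model's SWE-bench Pro result moves roughly 4.6% of its overall BenchLM ranking.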

About SWE-bench Pro

Year: 2026
Tasks: Real-world software engineering
Format: Repository task completion
Difficulty: Frontier coding agent

SWE-bench Pro is the more relevant frontier signal for selecting coding agents in 2026: its tasks are harder and more realistic than those in the older SWE-bench Verified subset.

BenchLM freshness & provenance

Version: SWE-bench Pro 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.

Leaderboard (30 models)

1. Claude Mythos Preview: 77.8%
2. Claude Opus 4.7 (Adaptive): 64.3%
3. GPT-5.5: 58.6%
4. 58.6%
5. 58.4%
6. 57.7%
7. 57.3%
8. 57.2%
9. 57.1%
10. 56.8%
11. 56.6%
12. 56.2%
13. 56.1%
14. 55.6%
15. 55.4%
16. 55.1%
17. 54.4%
18. 53.5%
19. 53.4%
20. 52.6%
21. 52.4%
22. 52.3%
23. 52.1%
24. 51.8%
25. 50.9%
26. 50.7%
27. 49.5%
28. 49.1%
29. 46.9%
30. 44.5%

FAQ

What does SWE-bench Pro measure?

SWE-bench Pro measures a coding agent's ability to complete realistic, repository-level software engineering tasks. It is harder than SWE-bench Verified and is intended to differentiate frontier models.

Which model scores highest on SWE-bench Pro?

Claude Mythos Preview by Anthropic currently leads with a score of 77.8% on SWE-bench Pro.

How many models are evaluated on SWE-bench Pro?

30 AI models have been evaluated on SWE-bench Pro on BenchLM.

Last updated: April 29, 2026 · BenchLM version SWE-bench Pro 2026
