
SWE-bench Pro

A stronger coding-agent benchmark than SWE-bench Verified, intended to differentiate frontier models on realistic software engineering work.

Top models on SWE-bench Pro — April 10, 2026

As of April 10, 2026, Claude Mythos Preview leads the SWE-bench Pro leaderboard with 77.8%, followed by GLM-5.1 (58.4%) and GPT-5.4 (57.7%).

14 models · Coding · 23% of category score · Current · Updated April 10, 2026

The leaderboard shows a clear frontier gap: Claude Mythos Preview leads GLM-5.1 by 19.4 points, while ranks 2 through 14 span only about 7.5 points. The benchmark cleanly separates the leader from an otherwise tightly bunched field.

14 models have been evaluated on SWE-bench Pro. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system; within that category, SWE-bench Pro contributes 23% of the category score, so strong performance here directly affects a model's overall ranking.
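The benchmark's pull on the overall ranking can be sketched with simple arithmetic, assuming BenchLM multiplies the category weight by the within-category weight (the actual aggregation may differ):

```python
# Sketch: effective contribution of SWE-bench Pro to a model's overall score.
# Assumes weights multiply; BenchLM's real aggregation is not documented here.
category_weight = 0.20   # Coding category's weight in the overall score
benchmark_weight = 0.23  # SWE-bench Pro's share of the Coding category

effective_weight = category_weight * benchmark_weight
print(f"{effective_weight:.3f}")  # prints 0.046
```

Under that assumption, SWE-bench Pro alone accounts for roughly 4.6% of a model's overall BenchLM score.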

About SWE-bench Pro

Year: 2026
Tasks: Real-world software engineering
Format: Repository task completion
Difficulty: Frontier coding agent

SWE-bench Pro is the more relevant frontier signal when selecting coding agents in 2026, reflecting more realistic difficulty than the older SWE-bench Verified subset.

BenchLM freshness & provenance

Version: SWE-bench Pro 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.

Leaderboard (14 models)

1. Claude Mythos Preview: 77.8%
2. GLM-5.1: 58.4%
3. GPT-5.4: 57.7%
4. 57.1%
5. 56.8%
6. 56.6%
7. 56.2%
8. 55.6%
9. 55.1%
10. 53.8%
11. 53.4%
12. 52.4%
13. 51.8%
14. 50.9%
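The gap structure claimed above can be checked directly from the published scores; a minimal sketch:

```python
# Spread of the 14 published SWE-bench Pro scores (percent), rank order.
scores = [77.8, 58.4, 57.7, 57.1, 56.8, 56.6, 56.2,
          55.6, 55.1, 53.8, 53.4, 52.4, 51.8, 50.9]

leader_gap = scores[0] - scores[1]    # gap between rank 1 and rank 2
pack_spread = scores[1] - scores[-1]  # spread across ranks 2 through 14

print(f"leader gap: {leader_gap:.1f} pts")    # prints leader gap: 19.4 pts
print(f"pack spread: {pack_spread:.1f} pts")  # prints pack spread: 7.5 pts
```

The leader's margin over second place is more than twice the spread of the entire rest of the field.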

FAQ

What does SWE-bench Pro measure?

SWE-bench Pro measures a model's ability to complete real-world software engineering tasks in actual repositories. It is a harder benchmark than SWE-bench Verified, designed to differentiate frontier models.

Which model scores highest on SWE-bench Pro?

Claude Mythos Preview by Anthropic currently leads with a score of 77.8% on SWE-bench Pro.

How many models are evaluated on SWE-bench Pro?

14 AI models have been evaluated on SWE-bench Pro on BenchLM.

Last updated: April 10, 2026 · BenchLM version SWE-bench Pro 2026
