
Software Engineering Benchmark Verified (SWE-bench Verified)

A curated, human-verified subset of SWE-bench that tests models on resolving real GitHub issues from popular open-source Python repositories like Django, Flask, and scikit-learn.

Top models on SWE-bench Verified — April 21, 2026

As of April 21, 2026, Claude Mythos Preview leads the SWE-bench Verified leaderboard with 93.9%, followed by Claude Opus 4.7 (87.6%) and GPT-5.3 Codex (85%).

34 models · Coding · 13% of category score · Refreshing · Updated April 21, 2026

The scores show moderate spread, with meaningful differences between the top-tier and mid-tier models.

34 models have been evaluated on SWE-bench Verified. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. Within that category, SWE-bench Verified contributes 13% of the category score, so strong performance here directly affects a model's overall ranking.
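As a rough illustration of how those weights compound (a sketch only: the 20% and 13% figures come from this page, but the multiplicative combination rule is an assumption, not BenchLM's published formula):

```python
# Sketch of how category weights might compound. The 20% category weight
# and 13% benchmark share are from the page; combining them by simple
# multiplication is an assumption.
CATEGORY_WEIGHT = 0.20   # Coding category's weight in the overall score
BENCHMARK_SHARE = 0.13   # SWE-bench Verified's share within Coding

def effective_weight(category_weight: float, benchmark_share: float) -> float:
    """Benchmark's effective share of the overall score, assuming
    weights combine multiplicatively."""
    return category_weight * benchmark_share

print(f"{effective_weight(CATEGORY_WEIGHT, BENCHMARK_SHARE):.1%}")  # 2.6%
```

Under that assumption, SWE-bench Verified would account for roughly 2.6% of a model's overall BenchLM score.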

About SWE-bench Verified

Year: 2024
Tasks: 500 verified issues
Format: Code patch generation
Difficulty: Professional software engineering

SWE-bench Verified is the gold standard for evaluating AI coding agents on real-world software engineering tasks. Each task requires understanding codebases, writing patches, and passing test suites.

BenchLM freshness & provenance

Version: SWE-bench Verified 2024
Refresh cadence: Annual
Staleness state: Refreshing
Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
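One way to picture that policy (the staleness states and treatment tiers are named on this page; the specific mapping and the helper function below are hypothetical illustrations, not BenchLM's actual rules):

```python
# Hypothetical mapping from a benchmark's staleness state to its scoring
# treatment. The state names and treatment tiers mirror this page; the
# mapping itself is an illustrative assumption, not BenchLM's policy.
TREATMENT_BY_STATE = {
    "fresh": "strong differentiator",
    "refreshing": "strong differentiator",  # assumed: still trusted mid-refresh
    "aging": "benchmark to watch",
    "stale": "display-only reference",
}

def treatment(staleness_state: str) -> str:
    """Return the assumed scoring treatment for a staleness state,
    defaulting to display-only for unknown states."""
    return TREATMENT_BY_STATE.get(staleness_state.lower(), "display-only reference")

print(treatment("Refreshing"))
```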

Leaderboard (34 models)

Rank  Score
1     93.9%
2     87.6%
3     85%
4     80.9%
5     80.8%
6     80.2%
7     80%
8     79.6%
9     78.8%
10    78%
11    77.8%
12    77.4%
13    77.2%
14    76.8%
15    76.8%
16    76.7%
17    76.2%
18    74.8%
19    74.5%
20    73.8%
21    73.4%
22    73.4%
23    73.3%
24    72.7%
25    72.4%
26    72%
27    70.8%
28    69.2%
29    63.8%
30    54.6%
31    49.3%
32    49%
33    42%
34    23.6%

FAQ

What does SWE-bench Verified measure?

SWE-bench Verified measures a model's ability to resolve real GitHub issues from popular open-source Python repositories such as Django, Flask, and scikit-learn, using a curated, human-verified subset of SWE-bench.

Which model scores highest on SWE-bench Verified?

Claude Mythos Preview by Anthropic currently leads with a score of 93.9% on SWE-bench Verified.

How many models are evaluated on SWE-bench Verified?

34 AI models have been evaluated on SWE-bench Verified on BenchLM.

Last updated: April 21, 2026 · BenchLM version SWE-bench Verified 2024
