Software Engineering Benchmark Verified (SWE-bench Verified)

A curated, human-verified subset of SWE-bench that tests models on resolving real GitHub issues from popular open-source Python repositories like Django, Flask, and scikit-learn.

Top models on SWE-bench Verified — April 16, 2026

As of April 16, 2026, Claude Mythos Preview leads the SWE-bench Verified leaderboard with 93.9%, followed by Claude Opus 4.7 (87.6%) and GPT-5.3 Codex (85%).

The scores show a moderate spread, with meaningful gaps between the top tier and the mid-tier models.

33 models have been evaluated on SWE-bench Verified. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. Within that category, SWE-bench Verified contributes 13% of the category score, so strong performance here directly affects a model's overall ranking.
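
As a rough illustration of how those two weights compound, the sketch below multiplies them to get the benchmark's effective share of a model's overall score. It assumes BenchLM combines category and benchmark weights multiplicatively; the published methodology may differ.

```python
# Illustrative sketch: assumes BenchLM combines weights multiplicatively.
CODING_CATEGORY_WEIGHT = 0.20  # Coding's share of the overall score
SWE_BENCH_SHARE = 0.13         # SWE-bench Verified's share of the Coding category

effective_weight = CODING_CATEGORY_WEIGHT * SWE_BENCH_SHARE
print(f"Effective overall weight: {effective_weight:.1%}")  # 2.6%

# Under this assumption, a 10-point gain on SWE-bench Verified moves a
# model's overall score by roughly 10 * 0.026 = 0.26 points.
```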

About SWE-bench Verified

Year: 2024
Tasks: 500 verified issues
Format: Code patch generation
Difficulty: Professional software engineering

SWE-bench Verified is the gold standard for evaluating AI coding agents on real-world software engineering tasks. Each task requires understanding codebases, writing patches, and passing test suites.
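
In the original SWE-bench harness, each instance pins a repository at a base commit and defines two test groups: FAIL_TO_PASS tests the patch must fix, and PASS_TO_PASS tests that must keep passing. The sketch below is a simplified version of that resolved/unresolved check; the function and its signature are illustrative rather than BenchLM's or SWE-bench's actual code, and the real harness runs each instance in a container with pinned dependencies.

```python
import subprocess

def is_resolved(repo_dir: str, patch: str,
                fail_to_pass: list[str], pass_to_pass: list[str]) -> bool:
    """Simplified SWE-bench-style check (illustrative, not the real harness):
    apply the model's patch, then require the issue's failing tests to pass
    and the regression tests to keep passing."""
    # Apply the generated patch to the checked-out base commit.
    applied = subprocess.run(["git", "apply", "-"], cwd=repo_dir,
                             input=patch, text=True)
    if applied.returncode != 0:
        return False  # a patch that does not apply counts as unresolved

    # Both test groups must pass for the instance to count as resolved.
    for tests in (fail_to_pass, pass_to_pass):
        result = subprocess.run(["python", "-m", "pytest", *tests], cwd=repo_dir)
        if result.returncode != 0:
            return False
    return True
```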

BenchLM freshness & provenance

Version: SWE-bench Verified 2024
Refresh cadence: Annual
Staleness state: Refreshing
Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
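
The page names only the "Refreshing" state, so the mapping below is a hypothetical reading of that three-tier policy: the other state names and all tier assignments are assumptions, sketched to show the shape of the decision rather than BenchLM's actual rules.

```python
from enum import Enum

class Tier(Enum):
    STRONG_DIFFERENTIATOR = "counts fully toward rankings"
    WATCH = "counts with reduced weight"
    DISPLAY_ONLY = "shown for reference, excluded from scoring"

# Hypothetical mapping: only "Refreshing" appears on this page; the other
# state names and every tier assignment here are illustrative assumptions.
FRESHNESS_TO_TIER = {
    "Fresh": Tier.STRONG_DIFFERENTIATOR,
    "Refreshing": Tier.STRONG_DIFFERENTIATOR,  # SWE-bench Verified's current state
    "Aging": Tier.WATCH,
    "Stale": Tier.DISPLAY_ONLY,
}

print(FRESHNESS_TO_TIER["Refreshing"].value)
```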

Leaderboard (33 models)

Rank  Model                  Score
1     Claude Mythos Preview  93.9%
2     Claude Opus 4.7        87.6%
3     GPT-5.3 Codex          85%
4                            80.9%
5                            80.8%
6                            80%
7                            79.6%
8                            78.8%
9                            78%
10                           77.8%
11                           77.4%
12                           77.2%
13                           76.8%
14                           76.8%
15                           76.7%
16                           76.2%
17                           74.8%
18                           74.5%
19                           73.8%
20                           73.4%
21                           73.4%
22                           73.3%
23                           72.7%
24                           72.4%
25                           72%
26                           70.8%
27                           69.2%
28                           63.8%
29                           54.6%
30                           49.3%
31                           49%
32                           42%
33                           23.6%

FAQ

What does SWE-bench Verified measure?

It measures a model's ability to resolve real GitHub issues from popular open-source Python repositories such as Django, Flask, and scikit-learn, using a curated, human-verified subset of SWE-bench tasks.

Which model scores highest on SWE-bench Verified?

Claude Mythos Preview by Anthropic currently leads with a score of 93.9% on SWE-bench Verified.

How many models are evaluated on SWE-bench Verified?

33 AI models have been evaluated on SWE-bench Verified on BenchLM.

Last updated: April 16, 2026 · BenchLM version SWE-bench Verified 2024
