
SWE Multilingual

A multilingual software-engineering benchmark for real-world code issue resolution across multiple programming languages.

Benchmark score on SWE Multilingual — April 10, 2026

BenchLM mirrors the published score view for SWE Multilingual. Claude Opus 4.5 leads the public snapshot at 77.5%, followed by MiniMax M2.7 (76.5%) and Qwen3.6 Plus (73.8%). BenchLM does not use these results to rank models overall.

5 models · Coding · Current · Display only · Updated April 10, 2026

The published SWE Multilingual snapshot is tightly clustered at the top: Claude Opus 4.5 sits at 77.5%, and the third-place score is only 3.7 points behind. The full spread across the five published scores is 4.5 points, so every published score sits in a relatively narrow band.
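The gaps quoted above are simple differences over the published scores:

```python
# Published SWE Multilingual scores, top to bottom (April 10, 2026 snapshot).
scores = [77.5, 76.5, 73.8, 73.3, 73.0]

gap_to_third = round(scores[0] - scores[2], 1)   # leader vs. third place
full_spread = round(scores[0] - scores[-1], 1)   # leader vs. last published score
print(gap_to_third, full_spread)  # 3.7 4.5
```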

5 models have been evaluated on SWE Multilingual. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. SWE Multilingual is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
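BenchLM's exact formula is not published on this page; as a minimal sketch under assumed names and schema, a weighted category average that skips display-only benchmarks might look like:

```python
from dataclasses import dataclass

# Hypothetical structure; BenchLM's real schema is not published here.
@dataclass
class BenchmarkResult:
    name: str
    category: str       # e.g. "Coding"
    score: float        # percentage, 0-100
    display_only: bool  # excluded from the scoring formula if True

# Assumed weights; the page only states that Coding carries 20%.
CATEGORY_WEIGHTS = {"Coding": 0.20}

def overall_score(results: list[BenchmarkResult]) -> float:
    """Average scorable benchmarks per category, then combine by category weight."""
    per_category: dict[str, list[float]] = {}
    for r in results:
        if r.display_only:
            continue  # e.g. SWE Multilingual: shown for reference only
        per_category.setdefault(r.category, []).append(r.score)
    total = 0.0
    for category, scores in per_category.items():
        weight = CATEGORY_WEIGHTS.get(category, 0.0)
        total += weight * (sum(scores) / len(scores))
    return total

results = [
    BenchmarkResult("SWE Multilingual", "Coding", 77.5, display_only=True),
    BenchmarkResult("Other coding benchmark", "Coding", 80.0, display_only=False),
]
print(overall_score(results))  # 16.0: only the scorable benchmark counts
```

The point of the sketch is the `display_only` guard: a benchmark in this state contributes nothing to the weighted total, matching the page's statement that SWE Multilingual does not affect overall rankings.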

About SWE Multilingual

Year

2026

Tasks

Multilingual software-engineering tasks

Format

Repository task completion

Difficulty

Professional software engineering

MiniMax reports SWE Multilingual as a coding benchmark focused on multilingual software-engineering tasks beyond single-language Python issue fixing.

BenchLM freshness & provenance

Version

SWE Multilingual 2026

Refresh cadence

Quarterly

Staleness state

Current

Question availability

Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
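The page names three treatments but not the decision rule; a hypothetical sketch of how freshness metadata could map to a treatment (the rule here is invented for illustration):

```python
# Hypothetical decision rule; BenchLM publishes the three treatments,
# not the logic that selects between them.
def treatment(staleness_state: str, in_scoring_formula: bool) -> str:
    if not in_scoring_formula:
        return "display-only reference"   # e.g. SWE Multilingual on this page
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

print(treatment("Current", in_scoring_formula=False))  # display-only reference
```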

Benchmark score table (5 models)

1. Claude Opus 4.5 - 77.5%
2. MiniMax M2.7 - 76.5%
3. Qwen3.6 Plus - 73.8%
4. 73.3%
5. 73.0%

FAQ

What does SWE Multilingual measure?

A multilingual software-engineering benchmark for real-world code issue resolution across multiple programming languages.

Which model scores highest on SWE Multilingual?

Claude Opus 4.5 by Anthropic currently leads with a score of 77.5% on SWE Multilingual.

How many models are evaluated on SWE Multilingual?

5 AI models have been evaluated on SWE Multilingual on BenchLM.

Last updated: April 10, 2026 · BenchLM version SWE Multilingual 2026
