A multi-language software-engineering benchmark that measures repository-level bug fixing and implementation across more than one programming ecosystem.
BenchLM mirrors the published score view for Multi-SWE Bench. MiniMax M2.7 leads the public snapshot at 52.7%. BenchLM does not use these results to rank models overall.
Year: 2026
Tasks: Multi-language repo tasks
Format: Repository task completion
Difficulty: Professional software engineering
MiniMax positions Multi-SWE Bench as a benchmark closer to real engineering work than isolated code generation, emphasizing multi-language repository workflows.
Version: Multi-SWE Bench 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
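Purely as a minimal sketch of how such a tiering decision could work, the example below combines the refresh cadence and staleness state shown above into one of the three tiers named in the prose. The function name, field names, and thresholds are all hypothetical assumptions for illustration, not BenchLM's published policy; the actual rules live on the methodology page.

```python
from datetime import date

# Assumed threshold: after this many missed refresh windows, a benchmark
# is treated as display-only. This value is a guess, not BenchLM policy.
STALE_AFTER_MISSED_REFRESHES = 2

def benchmark_tier(staleness_state: str, refresh_cadence_days: int,
                   last_refreshed: date, today: date) -> str:
    """Classify a benchmark into one of three hypothetical display tiers."""
    days_since_refresh = (today - last_refreshed).days
    missed_refreshes = days_since_refresh / refresh_cadence_days
    if staleness_state == "Current" and missed_refreshes < 1:
        return "strong differentiator"
    if missed_refreshes < STALE_AFTER_MISSED_REFRESHES:
        return "benchmark to watch"
    return "display-only reference"

# Example: a quarterly (90-day) benchmark last refreshed four months ago.
print(benchmark_tier("Current", 90, date(2026, 1, 1), date(2026, 5, 1)))
# -> "benchmark to watch"
```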
MiniMax's M2.7 currently leads Multi-SWE Bench with a score of 52.7%.
1 AI model has been evaluated on Multi-SWE Bench via BenchLM.