A cleanroom software-engineering benchmark where agents receive only a compiled executable and documentation, then must architect and implement a complete codebase that reproduces the original program's behavior.
BenchLM mirrors the public ProgramBench leaderboard from May 5, 2026. The official benchmark runs mini-SWE-agent on 200 cleanroom program-reconstruction tasks and reports 0% fully resolved for every evaluated model.
Because the primary fully-resolved metric is currently tied at zero, BenchLM displays the published almost-resolved rate as the visible score. A task counts as almost resolved when the agent passes at least 95% of its hidden behavioral tests. ProgramBench remains display-only on BenchLM and is excluded from the weighted coding and overall rankings until public model coverage broadens.
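To make the metric concrete, here is a minimal Python sketch of how an almost-resolved rate could be computed from per-task test counts. The 95% threshold follows the definition above; the function name and task data are illustrative, not ProgramBench's actual harness.

```python
# Sketch of the "almost resolved" metric defined above: a task counts as
# almost resolved when at least 95% of its hidden behavioral tests pass.
# Task data below is hypothetical.

ALMOST_RESOLVED_THRESHOLD = 0.95

def almost_resolved_rate(task_results: list[tuple[int, int]]) -> float:
    """task_results: one (tests_passed, tests_total) pair per task."""
    if not task_results:
        return 0.0
    almost = sum(
        1 for passed, total in task_results
        if total > 0 and passed / total >= ALMOST_RESOLVED_THRESHOLD
    )
    return almost / len(task_results)

# Example: 200 tasks, 6 of which clear the 95% bar.
results = [(96, 100)] * 6 + [(40, 100)] * 194
print(f"{almost_resolved_rate(results):.1%}")  # 3.0%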
BenchLM mirrors the published almost-resolved rate view for ProgramBench. Claude Opus 4.7 leads the public snapshot at 3.0%, followed by Claude Opus 4.6 (2.5%) and Claude Sonnet 4.6 (1.0%). BenchLM does not use these results to rank models overall.
| Model | Vendor | Model ID | Almost resolved |
| --- | --- | --- | --- |
| Claude Opus 4.7 | Anthropic | claude-opus-4-7-programbench | 3.0% |
| Claude Opus 4.6 | Anthropic | claude-opus-4-6-programbench | 2.5% |
| Claude Sonnet 4.6 | Anthropic | claude-sonnet-4-6-programbench | 1.0% |
The published ProgramBench snapshot is tightly clustered at the top: Claude Opus 4.7 sits at 3.0%, while the third-place row is only 2.0 points behind. The spread across all nine published rows is 3.0 points, so the scores sit in a relatively narrow band.
9 models have been evaluated on ProgramBench. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. ProgramBench is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
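For illustration, the sketch below shows one way a weighted category score can exclude display-only benchmarks so they appear on the page without moving rankings. The class and field names are assumptions; only the 20% Coding weight and ProgramBench's display-only status come from the text above.

```python
# Hypothetical sketch of category scoring that skips display-only
# benchmarks; not BenchLM's actual scoring code.

from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    score: float        # model's score on this benchmark, 0-100
    display_only: bool  # shown on the page but ignored in scoring

CODING_WEIGHT = 0.20  # Coding category weight in the overall score

def coding_category_score(results: list[BenchmarkResult]) -> float:
    scored = [r.score for r in results if not r.display_only]
    return sum(scored) / len(scored) if scored else 0.0

results = [
    BenchmarkResult("hypothetical-swe-benchmark", 62.0, display_only=False),
    BenchmarkResult("ProgramBench", 3.0, display_only=True),  # excluded
]
print(coding_category_score(results) * CODING_WEIGHT)  # 62.0 * 0.20 = 12.4
```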
Year: 2026
Tasks: 200 program-reconstruction tasks
Format: Cleanroom executable reimplementation
Difficulty: Full-repository software architecture
ProgramBench turns open-source projects into cleanroom reconstruction tasks. Each task starts from an execute-only binary and usage documentation, with no source code, internet, decompilation, or prescribed skeleton. Evaluation uses hidden behavioral tests generated through agent-driven fuzzing. BenchLM shows ProgramBench as display-only because all current public rows are tied at 0% fully resolved and the visible score is the auxiliary almost-resolved metric.
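ProgramBench's hidden tests are not public, but a behavioral test of this kind generally reduces to running the reference binary and the candidate reimplementation on the same input and comparing observable behavior. The sketch below illustrates that shape; the binary paths, arguments, and stdout/exit-code comparison rule are assumptions for illustration only.

```python
# Illustrative behavioral test: feed identical input to the reference
# binary and the candidate reimplementation, then compare exit codes
# and stdout. Paths and inputs here are placeholders.

import subprocess

def run(executable: str, args: list[str], stdin_data: str) -> tuple[int, str]:
    proc = subprocess.run(
        [executable, *args],
        input=stdin_data,
        capture_output=True,
        text=True,
        timeout=10,
    )
    return proc.returncode, proc.stdout

def behaviors_match(reference: str, candidate: str,
                    args: list[str], stdin_data: str) -> bool:
    return run(reference, args, stdin_data) == run(candidate, args, stdin_data)

# A fuzzer would generate many (args, stdin) cases; one hand-written case:
print(behaviors_match("./reference_bin", "./candidate_bin", ["--sort"], "b\na\n"))
```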
Version: ProgramBench 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public task metadata; behavioral tests remain hidden
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
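As a rough sketch of that decision, the logic described on this page might look like the following; the states, thresholds, and function name are illustrative assumptions, not BenchLM's actual policy.

```python
# Hypothetical sketch of the freshness-based treatment decision
# described above; rules here are illustrative, not BenchLM's policy.

def benchmark_treatment(staleness: str, evaluated_models: int,
                        primary_metric_tied: bool) -> str:
    if primary_metric_tied or evaluated_models < 10:
        return "display-only reference"
    if staleness != "Current":
        return "benchmark to watch"
    return "strong differentiator"

# ProgramBench today: staleness "Current", 9 public models,
# fully-resolved metric tied at 0% for every row.
print(benchmark_treatment("Current", 9, primary_metric_tied=True))
# -> "display-only reference"
```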
Claude Opus 4.7 currently leads the published ProgramBench snapshot with an almost-resolved rate of 3.0%. BenchLM shows this benchmark for display only and does not use it in overall rankings.
9 AI models are included in BenchLM's mirrored ProgramBench snapshot, based on the public leaderboard captured on May 5, 2026.