
ProgramBench: Can Language Models Rebuild Programs From Scratch?

A cleanroom software-engineering benchmark where agents receive only a compiled executable and documentation, then must architect and implement a complete codebase that reproduces the original program's behavior.

How BenchLM shows ProgramBench

BenchLM mirrors the public ProgramBench leaderboard from May 5, 2026. The official benchmark runs mini-SWE-agent on 200 cleanroom program-reconstruction tasks and reports a fully-resolved rate of 0% for every evaluated model.

Because the primary fully-resolved metric is currently tied at zero across all models, BenchLM displays the published almost-resolved rate as the visible score. A task counts as almost resolved when the agent passes at least 95% of its hidden behavioral tests. ProgramBench remains display only on BenchLM and is excluded from weighted coding and overall rankings until public model coverage broadens.
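To make the relationship between the two metrics concrete, here is a minimal sketch in Python, assuming per-task pass counts over the hidden test suite. All names and the 95% threshold encoding are illustrative assumptions, not BenchLM's or ProgramBench's actual code.

```python
from dataclasses import dataclass

ALMOST_RESOLVED_THRESHOLD = 0.95  # pass at least 95% of hidden tests


@dataclass
class TaskResult:
    """Hypothetical per-task result from the hidden behavioral suite."""
    task_id: str
    tests_passed: int
    tests_total: int

    @property
    def pass_fraction(self) -> float:
        return self.tests_passed / self.tests_total


def fully_resolved(result: TaskResult) -> bool:
    # The primary metric: every hidden behavioral test passes.
    return result.tests_passed == result.tests_total


def almost_resolved(result: TaskResult) -> bool:
    # The auxiliary metric shown on this page: >= 95% of tests pass.
    return result.pass_fraction >= ALMOST_RESOLVED_THRESHOLD


def almost_resolved_rate(results: list[TaskResult]) -> float:
    """Fraction of tasks where the agent cleared the 95% bar."""
    return sum(almost_resolved(r) for r in results) / len(results)
```

Under this reading, a model can score 0% fully resolved while still posting a nonzero almost-resolved rate, which is exactly the situation in the current snapshot.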

9 models · 200 tasks · 248,853 behavioral tests · 0% fully resolved · Display only

Almost resolved rate on ProgramBench — May 5, 2026

BenchLM mirrors the published almost-resolved rate view for ProgramBench. Claude Opus 4.7 leads the public snapshot at 3.0%, followed by Claude Opus 4.6 (2.5%) and Claude Sonnet 4.6 (1.0%). BenchLM does not use these results to rank models overall.

9 models · Coding · Current · Display only · Updated May 5, 2026

The published ProgramBench snapshot is tightly clustered: Claude Opus 4.7 sits at 3.0%, while the third row trails by only 2.0 points. The full spread across all nine published rows is just 3.0 points, so every score sits in a narrow band at or near zero.

9 models have been evaluated on ProgramBench. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. ProgramBench is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
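A hypothetical sketch of what "display only" means for a category-weighted score follows; the helper names, the second benchmark, and its score are illustrative assumptions, not BenchLM's published formula.

```python
# Hypothetical sketch of category-weighted scoring where display-only
# benchmarks contribute nothing. Not BenchLM's actual implementation.
CATEGORY_WEIGHTS = {"coding": 0.20}  # Coding carries a 20% weight


def category_score(rows: list[dict]) -> float | None:
    """Average only the scored (non-display-only) benchmarks."""
    scored = [r["score"] for r in rows if not r["display_only"]]
    return sum(scored) / len(scored) if scored else None


def overall_contribution(category: str, rows: list[dict]) -> float:
    score = category_score(rows)
    if score is None:
        return 0.0  # a category with only display-only rows adds nothing
    return CATEGORY_WEIGHTS[category] * score


# ProgramBench is display_only=True, so it drops out of the coding
# average entirely; "OtherCodingBench" and its score are made up.
coding_rows = [
    {"name": "ProgramBench", "score": 3.0, "display_only": True},
    {"name": "OtherCodingBench", "score": 61.2, "display_only": False},
]
print(overall_contribution("coding", coding_rows))  # 0.20 * 61.2
```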

About ProgramBench

Year

2026

Tasks

200 program reconstruction tasks

Format

Cleanroom executable reimplementation

Difficulty

Full-repository software architecture

ProgramBench turns open-source projects into cleanroom reconstruction tasks. Each task starts from an execute-only binary and usage documentation, with no source code, internet, decompilation, or prescribed skeleton. Evaluation uses hidden behavioral tests generated through agent-driven fuzzing. BenchLM shows ProgramBench as display-only because all current public rows are tied at 0% fully resolved and the visible score is the auxiliary almost-resolved metric.
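One plausible shape for a single hidden behavioral test is a differential check: run the original binary and the rebuilt one on the same fuzz-generated input and compare their observable behavior. The sketch below illustrates that general technique under stated assumptions; it is not ProgramBench's actual harness, and a real one would also need to handle timeouts, file side effects, and nondeterminism.

```python
import subprocess


def run_once(binary: str, argv: list[str], stdin_data: bytes,
             timeout: float) -> tuple[int, bytes, bytes]:
    """Run one executable on a single fuzz-generated input."""
    proc = subprocess.run([binary, *argv], input=stdin_data,
                          capture_output=True, timeout=timeout)
    return proc.returncode, proc.stdout, proc.stderr


def behaviors_match(reference: str, candidate: str, argv: list[str],
                    stdin_data: bytes, timeout: float = 10.0) -> bool:
    """Differential behavioral check: the rebuilt program passes this
    test iff its exit code, stdout, and stderr all match the original
    binary's on the same arguments and stdin."""
    return (run_once(reference, argv, stdin_data, timeout)
            == run_once(candidate, argv, stdin_data, timeout))
```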

BenchLM freshness & provenance

Version

ProgramBench 2026

Refresh cadence

Quarterly

Staleness state

Current

Question availability

Public task metadata and hidden behavioral tests

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
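As a rough illustration of that three-tier policy, the sketch below maps freshness metadata to a tier. The field names and rules are assumptions for illustration; the methodology page is authoritative.

```python
# Hypothetical tiering rule; the real policy is on the methodology page.
def benchmark_tier(staleness: str, display_only: bool,
                   tied_at_zero: bool) -> str:
    if display_only or tied_at_zero:
        return "display-only reference"   # e.g. ProgramBench today
    if staleness == "current":
        return "strong differentiator"
    return "benchmark to watch"
```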

Almost resolved rate table (9 models)

1. Claude Opus 4.7 (claude-opus-4-7-programbench): 3.0%
2. Claude Opus 4.6 (claude-opus-4-6-programbench): 2.5%
3. Claude Sonnet 4.6 (claude-sonnet-4-6-programbench): 1.0%
4. GPT 5.4 (gpt-5-4-programbench): 0.0%
5. Gemini 3.1 Pro (gemini-3-1-pro-programbench): 0.0%
6. Gemini 3 Flash (gemini-3-flash-programbench): 0.0%
7. Claude Haiku 4.5 (claude-haiku-4-5-programbench): 0.0%
8. GPT 5.4 mini (gpt-5-4-mini-programbench): 0.0%
9. GPT 5 mini (gpt-5-mini-programbench): 0.0%

FAQ

What does ProgramBench measure?

ProgramBench is a cleanroom software-engineering benchmark: agents receive only a compiled executable and usage documentation, then must architect and implement a complete codebase that reproduces the original program's behavior.

Which model leads the published ProgramBench snapshot?

Claude Opus 4.7 currently leads the published ProgramBench snapshot with an almost-resolved rate of 3.0%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on ProgramBench?

9 AI models are included in BenchLM's mirrored ProgramBench snapshot, based on the public leaderboard captured on May 5, 2026.

Last updated: May 5, 2026 · mirrored from the public benchmark leaderboard
