
Vibe Code Bench v1.1

A Vals.ai benchmark that evaluates whether models can build complete web applications from natural-language specifications in a production-like development environment.

Benchmark score on Vibe Code Bench — April 24, 2026

BenchLM mirrors the published score view for Vibe Code Bench. Claude Opus 4.7 leads the public snapshot at 71.00%, followed by GPT-5.5 (69.85%) and GPT-5.4 (67.42%). BenchLM does not use these results to rank models overall.

40 models · Coding · Current · Display only · Updated April 24, 2026

The published Vibe Code Bench snapshot is tightly clustered at the top: Claude Opus 4.7 sits at 71.00%, while the third row is only 3.58 points behind. The broader top-10 spread is 23.03 points, so the benchmark still separates strong models even when the leaders cluster.
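The two gaps quoted above follow directly from the published top-10 scores. A minimal check, using the percentages from the score table below:

```python
# Top-10 scores from the published Vibe Code Bench snapshot (percent).
top10 = [71.00, 69.85, 67.42, 61.77, 57.57, 53.50, 53.50, 51.48, 49.93, 47.97]

# Gap between the leader and the third row.
leader_to_third = round(top10[0] - top10[2], 2)   # 3.58

# Spread across the whole top 10.
top10_spread = round(top10[0] - top10[9], 2)      # 23.03

print(leader_to_third, top10_spread)
```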

40 models have been evaluated on Vibe Code Bench. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. However, Vibe Code Bench is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
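BenchLM's exact scoring formula is not reproduced on this page, but the display-only rule can be illustrated. In the hypothetical sketch below, benchmarks flagged display-only simply drop out of the category average before the 20% category weight is applied; the second benchmark and all numbers other than the 20% weight and the 71.00% score are made up for illustration:

```python
# Hypothetical sketch: display-only benchmarks are excluded from the
# category average, so they contribute nothing to the overall score.
benchmarks = [
    {"name": "Vibe Code Bench", "score": 71.00, "display_only": True},
    {"name": "Other Coding Bench", "score": 64.0, "display_only": False},  # made up
]

# Only scored (non-display-only) benchmarks enter the category average.
scored = [b["score"] for b in benchmarks if not b["display_only"]]
coding_avg = sum(scored) / len(scored)

CODING_WEIGHT = 0.20  # the 20% Coding category weight stated on this page
coding_contribution = CODING_WEIGHT * coding_avg

print(coding_avg, coding_contribution)
```

Under these assumptions, changing the Vibe Code Bench score changes nothing in the overall result, which is what "display only" means in practice.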

About Vibe Code Bench

Year: 2026
Tasks: End-to-end web application builds
Format: Full-stack app implementation benchmark
Difficulty: End-to-end software delivery

Vibe Code Bench v1.1 asks models to build full web apps with access to services and tools such as Supabase, Stripe test mode, email, web browsing, and file editing. The score is overall application pass accuracy across private end-to-end app tasks.
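Pass accuracy here is just the fraction of app tasks whose built application passes its end-to-end checks. A minimal sketch with made-up per-task outcomes (the real task set is private):

```python
# Hypothetical per-task outcomes: True if the built app passed its
# end-to-end checks, False otherwise. These results are made up.
task_results = [True, True, False, True, False]

# Pass accuracy as a percentage of tasks passed.
pass_accuracy = 100.0 * sum(task_results) / len(task_results)

print(f"{pass_accuracy:.2f}%")  # 60.00%
```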

BenchLM freshness & provenance

Version: Vibe Code Bench 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
Status: Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
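The three tiers named above can be sketched as a simple classification over the freshness metadata. This is a hypothetical illustration only; the actual policy lives on the BenchLM methodology page and may weigh other factors:

```python
# Hypothetical sketch of the freshness-based tiers named above.
def benchmark_tier(staleness: str, display_only: bool) -> str:
    """Map freshness metadata to one of the three BenchLM tiers."""
    if display_only:
        return "display-only reference"
    if staleness == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# Vibe Code Bench's case: Current, but flagged display-only.
print(benchmark_tier("Current", display_only=True))
```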

Benchmark score table (40 models)

Rank   Score
1      71.00%
2      69.85%
3      67.42%
4      61.77%
5      57.57%
6      53.50%
7      53.50%
8      51.48%
9      49.93%
10     47.97%
11     37.91%
12     37.89%
13     32.03%
14     31.46%
15     27.04%
16     26.10%
17     25.56%
18     24.61%
19     23.36%
20     22.62%
21     22.17%
22     20.63%
23     20.20%
24     20.09%
25     19.67%
26     17.54%
27     15.74%
28     14.85%
29     14.30%
30     14.17%
31     13.12%
32     11.39%
33     5.11%
34     4.06%
35     3.51%
36     3.09%
38     0.40%
39     0.00%
40     0.00%

FAQ

What does Vibe Code Bench measure?

Vibe Code Bench is a Vals.ai benchmark that evaluates whether models can build complete web applications from natural-language specifications in a production-like development environment.

Which model scores highest on Vibe Code Bench?

Claude Opus 4.7 by Anthropic currently leads with a score of 71.00% on Vibe Code Bench.

How many models are evaluated on Vibe Code Bench?

40 AI models have been evaluated on Vibe Code Bench on BenchLM.

Last updated: April 24, 2026 · BenchLM version Vibe Code Bench 2026
