Vals.ai benchmark for evaluating whether models can build complete web applications from natural language specifications in a production-like development environment.
BenchLM mirrors the published score view for Vibe Code Bench. Claude Opus 4.7 leads the public snapshot at 71.00%, followed by GPT-5.5 (69.85%) and GPT-5.4 (67.42%). BenchLM does not use these results to rank models overall.
Claude Opus 4.7
Anthropic
GPT-5.5
OpenAI
GPT-5.4
OpenAI
The published Vibe Code Bench snapshot is tightly clustered at the top: Claude Opus 4.7 sits at 71.00%, while the third row is only 3.58 points behind. The broader top-10 spread is 23.03 points, so the benchmark still separates strong models even when the leaders cluster.
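The gap figures above are simple differences over the sorted score column. A minimal sketch, assuming a hypothetical top-10 score list in which only the first three values (and the implied tenth-place score) come from the published snapshot:

```python
# Leader-gap and spread arithmetic for a leaderboard snapshot.
# The first three scores are the published leaders; the remaining
# entries are hypothetical placeholders chosen to match the stated
# 23.03-point top-10 spread.
scores = [71.00, 69.85, 67.42, 65.10, 61.00,
          58.30, 55.75, 52.40, 50.05, 47.97]

gap_to_third = scores[0] - scores[2]   # leader vs. third row
top10_spread = scores[0] - scores[-1]  # leader vs. tenth row

print(f"gap to third: {gap_to_third:.2f}")   # 3.58
print(f"top-10 spread: {top10_spread:.2f}")  # 23.03
```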
40 models have been evaluated on Vibe Code Bench. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. Vibe Code Bench itself is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
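The display-only status described above can be sketched as a filter applied before category weights. This is an illustrative assumption about how such an exclusion might work, not BenchLM's actual formula; the benchmark names other than Vibe Code Bench, and all scores besides its 71.00%, are hypothetical.

```python
# Sketch: exclude display-only benchmarks before applying category weights.
# Only the 20% Coding weight and Vibe Code Bench's display-only status
# come from the page; everything else is a placeholder.
benchmarks = [
    {"name": "Vibe Code Bench",    "category": "Coding", "score": 71.00, "display_only": True},
    {"name": "Other Coding Bench", "category": "Coding", "score": 64.00, "display_only": False},
]
category_weights = {"Coding": 0.20}

def overall_contribution(benchmarks, weights):
    """Average the scorable (non-display-only) benchmarks per category,
    then scale each category average by its weight."""
    contrib = {}
    for cat, weight in weights.items():
        scorable = [b["score"] for b in benchmarks
                    if b["category"] == cat and not b["display_only"]]
        if scorable:
            contrib[cat] = weight * sum(scorable) / len(scorable)
    return contrib

print(overall_contribution(benchmarks, category_weights))  # {'Coding': 12.8}
```

Because Vibe Code Bench is flagged `display_only`, only the other benchmark's score reaches the weighted average, matching the "excluded from the scoring formula" behavior described above.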
Year
2026
Tasks
End-to-end web application builds
Format
Full-stack app implementation benchmark
Difficulty
End-to-end software delivery
Vibe Code Bench v1.1 asks models to build full web apps with services such as Supabase, Stripe test mode, email, browsing, and file editing available. The score is overall application pass accuracy across private end-to-end app tasks.
Version
Vibe Code Bench 2026
Refresh cadence
Quarterly
Staleness state
Current
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
Claude Opus 4.7 by Anthropic currently leads with a score of 71.00% on Vibe Code Bench.
40 AI models have been evaluated on Vibe Code Bench on BenchLM.