A repo-level code generation and full-project delivery benchmark spanning web, mobile, and simulation-style implementation tasks.
BenchLM mirrors the published score view for VIBE-Pro. MiniMax M2.7 leads the public snapshot at 55.6%. BenchLM does not use these results to rank models overall.
Year
2026
Tasks
Full project delivery tasks
Format
Repository-level implementation benchmark
Difficulty
End-to-end software delivery
MiniMax describes VIBE-Pro as an end-to-end project delivery benchmark that tests whether a model can complete substantial product requirements rather than single-file snippets.
Version
VIBE-Pro 2026
Refresh cadence
Quarterly
Staleness state
Current
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
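The tiering described above can be illustrated with a small sketch. The thresholds, field names, and the `display_tier` function are all hypothetical — the actual scoring policy lives on the BenchLM methodology page — but the sketch shows how refresh cadence, staleness, and public availability could combine into a display tier.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkFreshness:
    refresh_cadence_days: int   # e.g. a quarterly cadence is roughly 90 days
    days_since_refresh: int     # time since the question set was last updated
    publicly_available: bool    # public question sets can leak into training data

def display_tier(meta: BenchmarkFreshness) -> str:
    """Hypothetical tiering: stale benchmarks are demoted from
    'strong differentiator' toward 'display-only reference'."""
    # Well past its refresh window: results no longer differentiate models.
    if meta.days_since_refresh > 2 * meta.refresh_cadence_days:
        return "display-only reference"
    # Public and overdue for a refresh: contamination risk is rising.
    if meta.publicly_available and meta.days_since_refresh > meta.refresh_cadence_days:
        return "benchmark to watch"
    return "strong differentiator"
```

Under these assumed thresholds, a public quarterly benchmark refreshed 30 days ago would still count as a strong differentiator, while one untouched for over two cadences would drop to display-only.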
MiniMax M2.7 by MiniMax currently leads with a score of 55.6% on VIBE-Pro.
1 AI model has been evaluated on VIBE-Pro on BenchLM.