A vision-centric benchmark for high-level multimodal reasoning and perception quality.
BenchLM mirrors the published score view for V*. Kimi K2.6 leads the public snapshot at 96.9%, tied with Qwen3.6 Plus (96.9%), with Qwen3.5 397B third at 95.8%. BenchLM does not use these results to rank models overall.
Kimi K2.6 (Moonshot AI): 96.9%
Qwen3.6 Plus (Alibaba): 96.9%
Qwen3.5 397B (Alibaba): 95.8%
The published V* snapshot is tightly clustered at the top: Kimi K2.6 sits at 96.9%, and the third-ranked model trails by only 1.1 points. The broader top-10 spread is 29.9 points, so the benchmark still separates strong models even when the leaders cluster.
10 models have been evaluated on V*. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. V* itself is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
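To make the category-weighting and display-only distinction concrete, here is a minimal sketch of how such a scheme could work. This is a hypothetical illustration only: the weight names, the `display_only` flag, and the helper function are assumptions, not BenchLM's actual implementation.

```python
# Hypothetical sketch of a category-weighted scoring scheme; BenchLM's
# real formula is not published here and may differ.

CATEGORY_WEIGHTS = {
    "multimodal_grounded": 0.12,  # the 12% category weight mentioned above
    "other_categories": 0.88,     # placeholder for the remaining weight
}

def overall_score(benchmark_scores, benchmark_meta):
    """Weighted average over scoring benchmarks, skipping display-only ones."""
    total, weight_sum = 0.0, 0.0
    for name, score in benchmark_scores.items():
        meta = benchmark_meta[name]
        if meta["display_only"]:
            continue  # e.g. V* contributes nothing to the overall ranking
        w = CATEGORY_WEIGHTS[meta["category"]]
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

scores = {"V*": 96.9, "SomeTextBench": 88.0}
meta = {
    "V*": {"category": "multimodal_grounded", "display_only": True},
    "SomeTextBench": {"category": "other_categories", "display_only": False},
}
print(overall_score(scores, meta))  # V* is excluded, so only SomeTextBench counts -> 88.0
```

Because V* is flagged display-only, its 96.9% result is skipped entirely; the overall score is determined by the remaining weighted benchmarks.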
Year
2026
Tasks
Frontier multimodal reasoning tasks
Format
Vision-centric reasoning benchmark
Difficulty
Frontier multimodal
BenchLM tracks V* as a display-only frontier multimodal benchmark reference outside the current weighted schema.
Version
V* 2026
Refresh cadence
Quarterly
Staleness state
Current
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
Kimi K2.6 by Moonshot AI currently leads with a score of 96.9% on V*.
10 AI models have been evaluated on V* on BenchLM.