A general visual question-answering benchmark used in provider tables for real-image reasoning quality.
As of March 2026, Qwen3.5 397B leads the MStar leaderboard with 83.8%, followed by Qwen3.6 Plus (83.3%) and Gemini 3 Pro (83.1%).
1. Qwen3.5 397B (Alibaba): 83.8%
2. Qwen3.6 Plus (Alibaba): 83.3%
3. Gemini 3 Pro (Google): 83.1%
The top three models are clustered within 0.7 points, suggesting this benchmark is nearing saturation for frontier models.
Six models have been evaluated on MStar. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. MStar itself is currently displayed for reference only and excluded from the scoring formula, so it does not directly affect overall rankings.
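To make the distinction concrete, here is a minimal sketch of how category-weighted scoring with reference-only benchmarks excluded could work. This is not BenchLM's published formula; the benchmark entries, weights, and helper names are assumptions for illustration.

```python
# Hypothetical sketch of category-weighted scoring, NOT BenchLM's actual formula.
# Benchmark entries, weights, and the exclusion flag are illustrative assumptions.

CATEGORY_WEIGHTS = {
    "Multimodal & Grounded": 0.12,  # per the page, this category carries 12% weight
    # ... other categories would account for the remaining 88%
}

# Each benchmark maps to (category, score_percent, scored).
# scored=False means "displayed for reference but excluded from the scoring formula".
benchmarks = {
    "MStar": ("Multimodal & Grounded", 83.8, False),        # reference-only, ignored below
    "SomeOtherVQA": ("Multimodal & Grounded", 79.0, True),   # hypothetical scored benchmark
}

def category_score(category: str) -> float:
    """Average only the scored (non-reference) benchmarks within one category."""
    scores = [s for cat, s, scored in benchmarks.values() if cat == category and scored]
    return sum(scores) / len(scores) if scores else 0.0

def overall_score() -> float:
    """Weight each category's average by its configured share of the total."""
    return sum(weight * category_score(cat) for cat, weight in CATEGORY_WEIGHTS.items())

print(overall_score())  # MStar's 83.8 does not move this number
```

Under this sketch, changing MStar's score changes nothing in the overall ranking until the benchmark is promoted into the scored set.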
Year: 2026
Tasks: Real-image visual QA
Format: Image-grounded QA
Difficulty: General visual reasoning
MStar sits between broad multimodal reasoning and grounded VQA. It is useful for checking whether a model can answer real-image questions without the stronger domain structure of office or academic benchmarks.
Version: MStar 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
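The exact rules live on the methodology page; as a loose sketch of the idea only, freshness-based tiering could look like the following. The cadence lengths, cutoffs, and tier boundaries here are assumptions, not BenchLM's published policy.

```python
from datetime import date

# Hypothetical freshness tiering, NOT BenchLM's published policy.
# Cadence lengths and the 1x/2x cutoffs below are illustrative assumptions.

CADENCE_DAYS = {"Quarterly": 90, "Annual": 365}

def staleness_tier(last_refresh: date, cadence: str, today: date) -> str:
    """Map how overdue a benchmark's refresh is to a display tier."""
    age = (today - last_refresh).days
    allowed = CADENCE_DAYS.get(cadence, 365)
    if age <= allowed:
        return "strong differentiator"   # within its refresh window
    if age <= 2 * allowed:
        return "benchmark to watch"      # aging, interpret with care
    return "display-only reference"      # stale, shown but not a differentiator

print(staleness_tier(date(2026, 1, 15), "Quarterly", date(2026, 3, 1)))
```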