A visual question answering benchmark focused on straightforward image-grounded understanding.
As of March 2026, GLM-5V-Turbo leads the SimpleVQA leaderboard with 78.2%, followed by Kimi K2.5 (71.5%) and Claude Opus 4.6 (63.2%).
GLM-5V-Turbo (Zhipu AI): 78.2%
Kimi K2.5 (Moonshot AI): 71.5%
Claude Opus 4.6 (Anthropic): 63.2%
According to BenchLM.ai, GLM-5V-Turbo leads the SimpleVQA benchmark with a score of 78.2%, followed by Kimi K2.5 (71.5%) and Claude Opus 4.6 (63.2%). The spread is moderate but meaningful: GLM-5V-Turbo leads Kimi K2.5 by 6.7 points, and Kimi K2.5 leads Claude Opus 4.6 by 8.3 points.
Three models have been evaluated on SimpleVQA. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. SimpleVQA itself is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
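To make the exclusion concrete, here is a minimal, hypothetical Python sketch (not BenchLM's actual code) of how a category-weighted overall score could skip display-only benchmarks. The OtherMultimodalBench entry, the field names, and the simple averaging rule are illustrative assumptions.

```python
# Hypothetical sketch: a category-weighted score that ignores display-only benchmarks.
CATEGORY_WEIGHTS = {"multimodal_grounded": 0.12}  # 12% weight, per the page

benchmarks = [
    # name, category, score, and whether the benchmark counts toward rankings
    {"name": "SimpleVQA", "category": "multimodal_grounded", "score": 78.2, "scored": False},
    {"name": "OtherMultimodalBench", "category": "multimodal_grounded", "score": 70.0, "scored": True},
]

def category_score(benchmarks, category):
    """Average only the benchmarks that are actually included in scoring."""
    scored = [b["score"] for b in benchmarks
              if b["category"] == category and b["scored"]]
    return sum(scored) / len(scored) if scored else None

def weighted_contribution(benchmarks, category):
    """Contribution of one category to the overall score."""
    avg = category_score(benchmarks, category)
    return 0.0 if avg is None else CATEGORY_WEIGHTS[category] * avg

print(weighted_contribution(benchmarks, "multimodal_grounded"))  # 0.12 * 70.0 = 8.4
```

Under these assumptions, SimpleVQA's 78.2% is shown on the page but never enters the weighted sum; only scored benchmarks in the category contribute.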
Year: 2026
Tasks: Visual QA tasks
Format: Image-grounded question answering
Difficulty: General visual understanding
BenchLM uses SimpleVQA as a display-only visual QA reference rather than a weighted multimodal ranking input.
Version: SimpleVQA 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
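As a rough illustration of how such a freshness policy could be expressed, the sketch below maps refresh metadata to the three treatment tiers named above. The function name, thresholds, and reference date are assumptions for illustration only, not the BenchLM methodology itself, which (per this page) can also keep a current benchmark like SimpleVQA as a display-only reference for other reasons.

```python
from datetime import date

def benchmark_treatment(last_refresh: date, cadence_days: int,
                        today: date = date(2026, 3, 1)) -> str:
    """Map freshness metadata to a treatment tier.

    The thresholds are illustrative assumptions: within one refresh cadence
    a benchmark is treated as a strong differentiator, within two cadences
    as a benchmark to watch, and anything older as a display-only reference.
    """
    age_days = (today - last_refresh).days
    if age_days <= cadence_days:
        return "strong differentiator"
    if age_days <= 2 * cadence_days:
        return "benchmark to watch"
    return "display-only reference"

# A quarterly benchmark last refreshed in mid-2025 would fall to display-only:
print(benchmark_treatment(date(2025, 6, 1), 90))  # "display-only reference"
```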
GLM-5V-Turbo by Zhipu AI currently leads with a score of 78.2% on SimpleVQA.
Three AI models have been evaluated on SimpleVQA via BenchLM.