A visual question answering benchmark focused on straightforward image-grounded understanding.
BenchLM mirrors the published score view for SimpleVQA. Gemini 3.1 Pro leads the public snapshot at 72.4%, followed by Muse Spark (71.3%) and GPT-5.4 (61.1%). BenchLM does not use these results to rank models overall.
| Rank | Model | Developer | Score |
|------|-------|-----------|-------|
| 1 | Gemini 3.1 Pro | Google | 72.4% |
| 2 | Muse Spark | Meta | 71.3% |
| 3 | GPT-5.4 | OpenAI | 61.1% |
The published SimpleVQA snapshot is tightly clustered at the top: Gemini 3.1 Pro sits at 72.4%, while third-place GPT-5.4 is only 11.3 points behind. The broader top-10 spread is 15.0 points, so the benchmark still separates strong models even when the leaders cluster.
Four models have been evaluated on SimpleVQA. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. SimpleVQA is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
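BenchLM's actual scoring formula is not reproduced on this page, but the exclusion rule is easy to illustrate. The Python sketch below is a hypothetical example, not BenchLM's API: the function, field names, and data shape are all assumptions. It shows how a display-only benchmark like SimpleVQA can be surfaced in results while contributing nothing to a weighted overall score.

```python
# Minimal sketch of a category-weighted overall score that skips
# display-only benchmarks. All names here are hypothetical; BenchLM's
# real formula and data model are not published on this page.

CATEGORY_WEIGHTS = {"Multimodal & Grounded": 0.12}  # 12% per the text above

def overall_score(results):
    """results: list of dicts with 'category', 'score', 'display_only'."""
    total, weight_sum = 0.0, 0.0
    for r in results:
        if r["display_only"]:  # e.g. SimpleVQA: shown, never scored
            continue
        w = CATEGORY_WEIGHTS.get(r["category"], 0.0)
        total += w * r["score"]
        weight_sum += w
    return total / weight_sum if weight_sum else None

# SimpleVQA is flagged display-only, so nothing is scored here and the
# function returns None rather than an overall ranking input.
print(overall_score([
    {"category": "Multimodal & Grounded", "score": 72.4, "display_only": True},
]))
```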
Year: 2026
Tasks: Visual QA tasks
Format: Image-grounded question answering
Difficulty: General visual understanding
BenchLM uses SimpleVQA as a display-only visual QA reference rather than a weighted multimodal ranking input.
Version: SimpleVQA 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
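As a rough illustration of that policy, the sketch below maps freshness metadata to one of the three tiers named above. The tier names come from this page; the thresholds and the override flag are invented for illustration and are not BenchLM's published rules.

```python
# Hypothetical sketch of a freshness-based status decision. Tier names
# come from the text above; thresholds and the override flag are
# assumptions, not BenchLM's documented policy.

def benchmark_status(staleness: str, quarters_since_refresh: int,
                     display_only_override: bool = False) -> str:
    # Some benchmarks (e.g. SimpleVQA here) are pinned to display-only
    # by policy regardless of how fresh they are.
    if display_only_override:
        return "display-only reference"
    if staleness == "Current" and quarters_since_refresh <= 1:
        return "strong differentiator"
    if staleness == "Current":
        return "benchmark to watch"
    return "display-only reference"

print(benchmark_status("Current", 0, display_only_override=True))
# -> "display-only reference", matching how SimpleVQA is treated.
```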