A grounded visual reasoning benchmark focused on evidence-based question answering over real images.
BenchLM mirrors the published score view for ERQA. Gemini 3.1 Pro leads the public snapshot at 69.4%, followed by GPT-5.4 (65.4%) and Muse Spark (64.7%). BenchLM does not use these results to rank models overall.
1. Gemini 3.1 Pro (Google): 69.4%
2. GPT-5.4 (OpenAI): 65.4%
3. Muse Spark (Meta): 64.7%
The published ERQA snapshot is tightly clustered at the top: Gemini 3.1 Pro sits at 69.4%, while the third row is only 4.7 points behind. The full spread across the evaluated field is 17.8 points, so the benchmark still separates strong models even when the leaders cluster.
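To make the clustering claim concrete, here is a minimal Python sketch that computes the leader gap from a score view like the one above. The score dictionary mirrors the published snapshot; the function names are illustrative and not part of BenchLM.

```python
# Published ERQA snapshot scores (percent), mirrored from the list above.
scores = {
    "Gemini 3.1 Pro": 69.4,
    "GPT-5.4": 65.4,
    "Muse Spark": 64.7,
}

def leader_gap(scores: dict[str, float], n: int = 3) -> float:
    """Gap in points between the leader and the n-th ranked model."""
    ranked = sorted(scores.values(), reverse=True)
    return round(ranked[0] - ranked[n - 1], 1)

def spread(scores: dict[str, float]) -> float:
    """Spread in points between the best and worst score in the view."""
    return round(max(scores.values()) - min(scores.values()), 1)

print(leader_gap(scores))  # 4.7 -> the top three are tightly clustered
```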
Five models have been evaluated on ERQA. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. ERQA itself is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
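As a rough illustration of how a display-only benchmark can sit outside an otherwise weighted score, the sketch below aggregates results by category weight and simply skips anything flagged as reference-only. The data structures and flag names are assumptions made for this example; only the 12% category weight comes from the text above.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    category: str
    score: float        # percent
    display_only: bool  # excluded from the scoring formula if True

# Hypothetical weight table; only "Multimodal & Grounded" = 0.12 is
# stated on this page.
CATEGORY_WEIGHTS = {"Multimodal & Grounded": 0.12}

def overall_score(results: list[BenchmarkResult]) -> float:
    """Weighted average over scoring-eligible benchmarks only."""
    total, weight_sum = 0.0, 0.0
    for r in results:
        if r.display_only:
            continue  # reference-only benchmarks (like ERQA here) are skipped
        w = CATEGORY_WEIGHTS.get(r.category, 0.0)
        total += w * r.score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# ERQA contributes nothing because it is flagged display-only.
erqa = BenchmarkResult("ERQA", "Multimodal & Grounded", 69.4, display_only=True)
print(overall_score([erqa]))  # 0.0 -> no scoring-eligible results
```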
Year: 2026
Tasks: Evidence-based visual QA
Format: Grounded image reasoning
Difficulty: Grounded multimodal reasoning
ERQA is useful as a grounded reasoning check because it emphasizes answer correctness tied to visual evidence rather than fluent but ungrounded descriptions.
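To show what "correctness tied to visual evidence" could look like operationally, here is a minimal, hypothetical grading sketch in which an item counts as correct only if the final answer matches the gold answer, no matter how fluent the surrounding description is. ERQA's actual grading protocol is not specified on this page; every name below is an assumption.

```python
def grade_item(predicted_answer: str, gold_answer: str) -> bool:
    """Hypothetical exact-match grading: fluency earns no credit; only
    the final answer string is compared (case and whitespace folded)."""
    def normalize(s: str) -> str:
        return " ".join(s.lower().split())
    return normalize(predicted_answer) == normalize(gold_answer)

# A fluent but ungrounded description still scores zero:
print(grade_item("A bustling street scene with many cars", "3"))  # False
print(grade_item(" 3 ", "3"))                                     # True
```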
Version: ERQA 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
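Below is a minimal sketch of how freshness metadata could drive that three-way decision. The tier names come from this page; the mapping rule itself is an invented assumption, and BenchLM's real policy on its methodology page may differ.

```python
def benchmark_tier(staleness_state: str, excluded_from_scoring: bool) -> str:
    """Hypothetical mapping from freshness metadata to a display tier.
    Tier names are taken from the page; the rule is an assumption."""
    if excluded_from_scoring:
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# ERQA: staleness "Current", but excluded from the scoring formula.
print(benchmark_tier("Current", excluded_from_scoring=True))
# -> "display-only reference"
```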
Gemini 3.1 Pro by Google currently leads ERQA with a score of 69.4%.
Five AI models have been evaluated on ERQA on BenchLM.