This reporting page isolates visual reasoning and image understanding from the broader multimodal category, ranking models using only sourced benchmarks in the reporting family: diagrams, grounding, counting, real-world image QA, and multimodal math.
According to BenchLM.ai, GPT-5.4 Pro leads this ranking with a score of 94, followed by Claude Mythos Preview (92.7) and Qwen3.5-122B-A10B (83.9). A gap of nearly nine points separates the top two models from the rest of the field.
The best open-weight option is Qwen3.5-122B-A10B, ranked #3 with a score of 83.9. Open-weight models are highly competitive in this category, making self-hosting a viable alternative to proprietary APIs.
This ranking is based on provisional weighted scores computed with BenchLM.ai's scoring formula. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
GPT-5.4 Pro
OpenAI · 1.05M
Claude Mythos Preview
Anthropic · 1M
Qwen3.5-122B-A10B
Alibaba · 262K
The top model on this sourced reporting-family slice is GPT-5.4 Pro by OpenAI with an average of 94.
The best open-weight model is Qwen3.5-122B-A10B at position #3.
22 models are listed with sourced benchmark coverage in this reporting family.