A referring-expression grounding benchmark averaged across RefCOCO variants to test whether a model can localize described objects correctly.
BenchLM is tracking RefCOCO (avg) in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.
These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.
BenchLM mirrors the published tracked-score view for RefCOCO (avg). Qwen3.6 Plus leads the public snapshot at 93.5%, followed by Qwen3.5 397B (92.3%) and Kimi K2.5 (87.8%). BenchLM does not use these results to rank models overall.
Model          Vendor        Model ID       Tracked score
Qwen3.6 Plus   Alibaba       qwen3-6-plus   93.5%
Qwen3.5 397B   Alibaba       qwen3-5-397b   92.3%
Kimi K2.5      Moonshot AI   kimi-k2-5      87.8%
The published RefCOCO (avg) snapshot is tightly clustered at the top: Qwen3.6 Plus sits at 93.5%, while the third-ranked row trails by only 5.7 points. The spread across all tracked rows is 9.4 points, so the published scores sit in a relatively narrow band.
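As a quick sanity check on that gap, the minimal sketch below recomputes the leader-to-third distance from the scores in the reference table. The score dict is transcribed from the visible rows; the fourth tracked row's score is not shown on this page, so the full 9.4-point spread cannot be reproduced from the visible data.

```python
# Minimal sketch: verify the leader-to-third gap from the tracked rows above.
# Scores are transcribed from the visible table; the fourth tracked row's
# score is not shown on this page, so the full spread is not recomputed here.
tracked_scores = {
    "qwen3-6-plus": 93.5,
    "qwen3-5-397b": 92.3,
    "kimi-k2-5": 87.8,
}

ranked = sorted(tracked_scores.values(), reverse=True)
leader_to_third = ranked[0] - ranked[2]
print(f"leader-to-third gap: {leader_to_third:.1f} points")  # 5.7
```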
Four models have been evaluated on RefCOCO (avg). The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. RefCOCO (avg) is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
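To make the weighting and exclusion concrete, here is a minimal sketch of how a category-weighted overall score could be computed while skipping display-only benchmarks. The function and field names (`overall_score`, `display_only`, `CATEGORY_WEIGHTS`) are hypothetical illustrations, not BenchLM's actual implementation.

```python
# Hypothetical sketch of category-weighted scoring with display-only rows
# excluded. Field names (weight, display_only, score) are illustrative only.
CATEGORY_WEIGHTS = {
    "Multimodal & Grounded": 0.12,  # 12% weight, as stated on this page
    # ...other categories would carry the remaining 88%...
}

benchmarks = [
    {"name": "RefCOCO (avg)", "category": "Multimodal & Grounded",
     "score": 93.5, "display_only": True},  # shown for reference, not scored
    # ...other tracked benchmarks...
]

def overall_score(rows, weights):
    """Average scored benchmarks within each category, then weight categories."""
    by_category = {}
    for row in rows:
        if row["display_only"]:
            continue  # display-only benchmarks never reach the formula
        by_category.setdefault(row["category"], []).append(row["score"])
    total = 0.0
    for category, scores in by_category.items():
        total += weights.get(category, 0.0) * (sum(scores) / len(scores))
    return total
```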
Year: 2026
Tasks: Referring-expression grounding
Format: Grounded visual localization
Difficulty: Fine-grained visual grounding
RefCOCO-style tasks matter for grounding-heavy assistants because they measure whether the model can map language to specific objects or regions instead of only answering abstract questions.
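As an illustration of what "localizing a described object correctly" typically means in RefCOCO-style evaluation, the sketch below scores a predicted bounding box against an annotated box using intersection-over-union (IoU), counting a prediction as correct when IoU ≥ 0.5. That threshold is the conventional RefCOCO accuracy criterion; the page does not specify BenchLM's exact scoring rule, so treat this as a general illustration with made-up sample boxes.

```python
# Illustrative RefCOCO-style grounding check: a predicted box counts as
# correct when its IoU with the ground-truth box is at least 0.5.
# Boxes are (x1, y1, x2, y2) in pixels; the sample data below is invented.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def grounding_accuracy(predictions, ground_truths, threshold=0.5):
    """Fraction of referring expressions whose predicted box reaches the IoU threshold."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

# Hypothetical example: "the dog on the left" -> predicted vs annotated box.
print(grounding_accuracy([(10, 20, 110, 220)], [(12, 25, 115, 230)]))  # 1.0
```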
Version: RefCOCO (avg) 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
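As a rough illustration of how freshness metadata could drive that three-way classification, here is a minimal sketch. The tier names mirror the page's wording, but the cadence thresholds and logic are hypothetical, not BenchLM's published policy.

```python
# Hypothetical freshness-tier sketch. Tier names follow the page's wording;
# the cadence windows below are invented for illustration.
from datetime import date

REFRESH_DAYS = {"Quarterly": 90, "Annual": 365}

def freshness_tier(last_refresh: date, cadence: str, today: date) -> str:
    """Map a benchmark's refresh metadata to a display tier."""
    age = (today - last_refresh).days
    window = REFRESH_DAYS.get(cadence, 365)
    if age <= window:
        return "strong differentiator"
    if age <= 2 * window:
        return "benchmark to watch"
    return "display-only reference"

# Example with made-up dates: a quarterly benchmark refreshed 60 days ago.
print(freshness_tier(date(2026, 2, 9), "Quarterly", date(2026, 4, 10)))
```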
Qwen3.6 Plus currently leads the published RefCOCO (avg) snapshot with a tracked score of 93.5%. BenchLM shows this benchmark for display only and does not use it in overall rankings.
Four AI models are included in BenchLM's mirrored RefCOCO (avg) snapshot, based on the public leaderboard captured on April 10, 2026.