
RefCOCO average (RefCOCO (avg))

A referring-expression grounding benchmark averaged across RefCOCO variants to test whether a model can localize described objects correctly.

How BenchLM shows RefCOCO (avg) right now

BenchLM is tracking RefCOCO (avg) in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.

4 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only

Tracked score on RefCOCO (avg) — April 10, 2026

BenchLM mirrors the published tracked score view for RefCOCO (avg). Qwen3.6 Plus leads the public snapshot at 93.5%, followed by Qwen3.5 397B (92.3%) and Kimi K2.5 (87.8%). BenchLM does not use these results to rank models overall.

4 models · Multimodal & Grounded · Current · Display only · Updated April 10, 2026

The published RefCOCO (avg) snapshot is tightly clustered at the top: Qwen3.6 Plus sits at 93.5%, while the third row is only 5.7 points behind. The full four-model spread is 9.4 points, so the published scores sit in a relatively narrow band.

4 models have been evaluated on RefCOCO (avg). The benchmark falls in the Multimodal & Grounded category. This category carries a 12% weight in BenchLM.ai's overall scoring system. RefCOCO (avg) is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
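BenchLM's pages don't spell out the scoring formula itself, but the two facts above (a 12% category weight and a per-benchmark exclusion from the formula) suggest its shape. A minimal sketch, assuming a category-weighted average with exclusion flags; the structure, the second benchmark, and its score are hypothetical, and only the 12% weight and the RefCOCO (avg) exclusion come from this page:

```python
# Hypothetical sketch of category-weighted scoring with exclusion flags.
# Only the 12% "Multimodal & Grounded" weight and RefCOCO (avg)'s
# display-only status come from the page; everything else is invented.

CATEGORY_WEIGHTS = {"Multimodal & Grounded": 0.12}  # other categories omitted

benchmarks = [
    # (name, category, score, included_in_scoring)
    ("RefCOCO (avg)", "Multimodal & Grounded", 93.5, False),  # display only
    ("SomeOtherBench", "Multimodal & Grounded", 88.0, True),  # hypothetical
]

def category_score(category: str) -> float | None:
    """Average the scores of included benchmarks in one category."""
    scores = [s for (_, cat, s, included) in benchmarks
              if cat == category and included]
    return sum(scores) / len(scores) if scores else None

def weighted_contribution(category: str) -> float:
    """One category's contribution to the overall score."""
    score = category_score(category)
    return CATEGORY_WEIGHTS[category] * score if score is not None else 0.0

# RefCOCO (avg) is excluded, so only SomeOtherBench contributes here.
print(weighted_contribution("Multimodal & Grounded"))  # 0.12 * 88.0 = 10.56
```

Under this sketch, changing the RefCOCO (avg) score would move the reference table but not the weighted contribution, which matches the page's claim that the benchmark does not affect overall rankings.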

About RefCOCO (avg)

Year: 2026
Tasks: Referring-expression grounding
Format: Grounded visual localization
Difficulty: Fine-grained visual grounding

RefCOCO-style tasks matter for grounding-heavy assistants because they measure whether the model can map language to specific objects or regions instead of only answering abstract questions.
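Concretely, referring-expression grounding is usually scored with box-level intersection-over-union: a predicted box counts as correct when its IoU with the annotated box reaches a threshold, commonly 0.5. A minimal sketch under that assumption (the (x1, y1, x2, y2) box format and the 0.5 threshold follow the common convention, not anything this page specifies):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(predictions, ground_truths, threshold=0.5):
    """Fraction of referring expressions grounded with IoU >= threshold."""
    hits = sum(iou(p, g) >= threshold
               for p, g in zip(predictions, ground_truths))
    return hits / len(ground_truths)

# One expression, e.g. "the dog on the left": predicted vs. annotated box.
print(grounding_accuracy([(10, 10, 50, 60)], [(12, 8, 52, 58)]))  # 1.0
```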

BenchLM freshness & provenance

Version: RefCOCO (avg) 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
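The methodology page holds the authoritative policy; the sketch below only illustrates how the metadata shown above (refresh cadence, last update) could map to the three treatments named here. The cutoffs are invented for illustration:

```python
from datetime import date

# Hypothetical cutoffs: the real policy lives on the BenchLM methodology page.
CADENCE_DAYS = {"Monthly": 30, "Quarterly": 90, "Annually": 365}

def benchmark_treatment(last_updated: date, cadence: str, today: date) -> str:
    """Map refresh metadata to one of the three treatments named above."""
    age_days = (today - last_updated).days
    window = CADENCE_DAYS[cadence]
    if age_days <= window:
        return "strong differentiator"
    if age_days <= 2 * window:
        return "benchmark to watch"
    return "display-only reference"

# RefCOCO (avg): quarterly cadence, last updated April 10, 2026.
print(benchmark_treatment(date(2026, 4, 10), "Quarterly", date(2026, 5, 1)))
# -> "strong differentiator" (21 days into a 90-day window)
```

Note that RefCOCO (avg) is display-only here for a different reason (pending exact-source attachments), so freshness is only one input to that decision.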

Tracked score table (4 models)

1. Qwen3.6 Plus (qwen3-6-plus) · 93.5%
2. Qwen3.5 397B (qwen3-5-397b) · 92.3%
3. Kimi K2.5 (kimi-k2-5) · 87.8%
4. Gemini 3 Pro (gemini-3-pro) · 84.1%

FAQ

What does RefCOCO (avg) measure?

A referring-expression grounding benchmark averaged across RefCOCO variants to test whether a model can localize described objects correctly.

Which model leads the published RefCOCO (avg) snapshot?

Qwen3.6 Plus currently leads the published RefCOCO (avg) snapshot with a tracked score of 93.5%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on RefCOCO (avg)?

4 AI models are included in BenchLM's mirrored RefCOCO (avg) snapshot, based on the public leaderboard captured on April 10, 2026.

Last updated: April 10, 2026 · mirrored from the public benchmark leaderboard
