AI2D test split (AI2D_TEST)

A diagram understanding benchmark focused on scientific and educational visual question answering.

How BenchLM shows AI2D_TEST right now

BenchLM is tracking AI2D_TEST in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.

6 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only
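The sketch below shows one way such display-only tracked rows could be represented while exact-source attachments are pending. It is a minimal illustration only: the TrackedRow structure, field names, and helper functions are assumptions for this page, not BenchLM's actual schema.

```python
from dataclasses import dataclass

@dataclass
class TrackedRow:
    """One locally tracked benchmark result (illustrative schema, not BenchLM's)."""
    model_id: str
    score: float                   # accuracy in percent
    source_attached: bool = False  # True once an exact-source verification record exists

def display_rows(rows: list[TrackedRow]) -> list[TrackedRow]:
    """Rows are always shown for inspection, sorted by score."""
    return sorted(rows, key=lambda r: r.score, reverse=True)

def is_display_only(rows: list[TrackedRow]) -> bool:
    """The table stays display-only until every row has an exact-source attachment."""
    return not all(r.source_attached for r in rows)
```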

Tracked score on AI2D_TEST — April 10, 2026

BenchLM mirrors the published tracked score view for AI2D_TEST. Qwen3.6 Plus leads the public snapshot at 94.4%, followed by Gemini 3 Pro (94.1%) and Qwen3.5 397B (93.9%). BenchLM does not use these results to rank models overall.

6 models · Multimodal & Grounded · Current · Display only · Updated April 10, 2026

The published AI2D_TEST snapshot is tightly clustered at the top: Qwen3.6 Plus sits at 94.4%, while the third row is only 0.5 points behind. The spread across all six tracked models is 6.7 points, so most of the published scores sit in a relatively narrow band.
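Those two figures follow directly from the tracked score table below. The snippet is just the arithmetic spelled out, using the scores as published:

```python
# Scores copied from the tracked score table on this page.
scores = {
    "qwen3-6-plus": 94.4,
    "gemini-3-pro": 94.1,
    "qwen3-5-397b": 93.9,
    "gpt-5-2": 92.2,
    "kimi-k2-5": 90.8,
    "claude-opus-4-5": 87.7,
}

ranked = sorted(scores.values(), reverse=True)
spread = ranked[0] - ranked[-1]   # 94.4 - 87.7 = 6.7 points across all six models
top3_gap = ranked[0] - ranked[2]  # 94.4 - 93.9 = 0.5 points between rank 1 and rank 3
print(f"spread={spread:.1f}, top-3 gap={top3_gap:.1f}")
```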

Six models have been evaluated on AI2D_TEST. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. AI2D_TEST itself is currently displayed for reference only and excluded from the scoring formula, so it does not directly affect overall rankings.
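As a rough illustration of how a category weight combines with a display-only exclusion, the sketch below averages a category's benchmarks while skipping excluded ones. The weight and the exclusion rule mirror the description above; the function, the second benchmark name, and its score are hypothetical, not BenchLM's actual implementation.

```python
# 12% of the overall score comes from the Multimodal & Grounded category.
CATEGORY_WEIGHTS = {"multimodal_grounded": 0.12}

def category_score(results: dict[str, float], display_only: set[str]) -> float:
    """Average a category's benchmark scores, skipping display-only benchmarks."""
    scored = {bench: s for bench, s in results.items() if bench not in display_only}
    return sum(scored.values()) / len(scored) if scored else 0.0

# AI2D_TEST is present in the results but flagged display-only, so it is skipped.
multimodal_results = {"AI2D_TEST": 94.4, "OTHER_BENCH": 88.0}  # OTHER_BENCH is hypothetical
score = category_score(multimodal_results, display_only={"AI2D_TEST"})
overall_contribution = CATEGORY_WEIGHTS["multimodal_grounded"] * score
```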

About AI2D_TEST

Year

2026

Tasks

Diagram understanding

Format

Diagram-grounded QA

Difficulty

Structured visual reasoning

AI2D-style tasks matter because diagrams compress structure differently from photos or office documents. They test whether a model can parse arrows, labels, and spatial relations in technical illustrations.

BenchLM freshness & provenance

Version

AI2D_TEST 2026

Refresh cadence

Quarterly

Staleness state

Current

Question availability

Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
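One way to picture that decision is as a simple mapping from freshness metadata to a display tier. The tier names come from the sentence above; the staleness labels, the verification flag, and the ordering of checks are assumptions for illustration, not BenchLM's published policy.

```python
def display_tier(staleness: str, source_verified: bool) -> str:
    """Map freshness metadata to how a benchmark is treated (illustrative only)."""
    if not source_verified:
        return "display-only reference"   # rows still awaiting exact-source attachments
    if staleness == "current":
        return "strong differentiator"
    if staleness == "aging":
        return "benchmark to watch"
    return "display-only reference"

# AI2D_TEST today: freshness is "current" but source attachments are incomplete.
print(display_tier("current", source_verified=False))  # -> display-only reference
```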

Tracked score table (6 models)

Rank  Model            Model ID          Score
1     Qwen3.6 Plus     qwen3-6-plus      94.4%
2     Gemini 3 Pro     gemini-3-pro      94.1%
3     Qwen3.5 397B     qwen3-5-397b      93.9%
4     GPT-5.2          gpt-5-2           92.2%
5     Kimi K2.5        kimi-k2-5         90.8%
6     Claude Opus 4.5  claude-opus-4-5   87.7%

FAQ

What does AI2D_TEST measure?

AI2D_TEST is a diagram understanding benchmark focused on scientific and educational visual question answering.

Which model leads the published AI2D_TEST snapshot?

Qwen3.6 Plus currently leads the published AI2D_TEST snapshot with a tracked score of 94.4%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on AI2D_TEST?

6 AI models are included in BenchLM's mirrored AI2D_TEST snapshot, based on the public leaderboard captured on April 10, 2026.

Last updated: April 10, 2026 · mirrored from the public benchmark leaderboard