
OmniDocBench 1.5

A document understanding benchmark used in frontier-model comparison tables to measure extraction and grounded reasoning quality on complex documents.

Benchmark score on OmniDocBench 1.5 — April 16, 2026

BenchLM mirrors the published score view for OmniDocBench 1.5. Qwen3.6-35B-A3B leads the public snapshot at 89.9%. BenchLM does not use these results to rank models overall.

1 model · Multimodal & Grounded · Current · Display only · Updated April 16, 2026

About OmniDocBench 1.5

Year: 2026
Tasks: Document understanding tasks
Format: Document understanding benchmark
Difficulty: Grounded document reasoning

BenchLM stores OmniDocBench 1.5 scores in the higher-is-better format used in current first-party comparison tables. Earlier lower-is-better, error-style rows are intentionally not mixed into this benchmark key.
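As a hypothetical illustration (BenchLM's internal representation is not published), keeping one orientation means any lower-is-better error rate would need to be flipped onto the shared higher-is-better percentage scale. The function name and the flip rule below are assumptions for the sketch, not BenchLM's documented logic:

```python
def to_score(value: float, higher_is_better: bool) -> float:
    """Normalize a benchmark result to a higher-is-better percentage scale.

    Error-style results (lower is better) are flipped so all stored rows
    share one orientation; assumes values are percentages in [0, 100].
    """
    return value if higher_is_better else 100.0 - value

# An accuracy-style score passes through unchanged.
print(to_score(89.9, higher_is_better=True))
# An error-style 10.1% maps to 89.9 on the shared scale.
print(to_score(10.1, higher_is_better=False))
```

Refusing to mix orientations in one key avoids exactly the bug this flip would otherwise hide: a table silently ranking a 10.1% error rate below an 11% accuracy score.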

BenchLM freshness & provenance

Version: OmniDocBench 1.5 (2026)
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
Status: Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
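The tier names above come from this page, but the mapping from freshness metadata to a tier is not spelled out here; the rule below is a hypothetical sketch of how such a decision might look, not BenchLM's published policy:

```python
def benchmark_tier(staleness_state: str, display_only: bool) -> str:
    """Map freshness metadata to a reporting tier (assumed mapping).

    Tier names mirror the BenchLM methodology description on this page;
    the precedence (display-only flag wins, then staleness) is a guess.
    """
    if display_only:
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# OmniDocBench 1.5 is marked Current but flagged display-only.
print(benchmark_tier("Current", display_only=True))
```

Under this sketch the display-only flag overrides staleness, which matches how this page labels a "Current" benchmark as display only.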

Benchmark score table (1 model)

1. Qwen3.6-35B-A3B: 89.9%

FAQ

What does OmniDocBench 1.5 measure?

A document understanding benchmark used in frontier-model comparison tables to measure extraction and grounded reasoning quality on complex documents.

Which model scores highest on OmniDocBench 1.5?

Qwen3.6-35B-A3B by Alibaba currently leads with a score of 89.9% on OmniDocBench 1.5.

How many models are evaluated on OmniDocBench 1.5?

1 AI model has been evaluated on OmniDocBench 1.5 on BenchLM.

Last updated: April 16, 2026 · BenchLM version OmniDocBench 1.5 2026
