olmOCR-Bench (olmOCR)

An end-to-end document understanding benchmark over long, layout-rich PDFs with tables, equations, headers, footnotes, and multi-column flows.

Benchmark score on olmOCR — May 12, 2026

BenchLM mirrors the published score view for olmOCR. Interfaze Beta leads the public snapshot at 85.7%. BenchLM does not use these results to rank models overall.

1 model · Multimodal & Grounded · Current · Display only · Updated May 12, 2026

About olmOCR

Year: 2025

Tasks: Layout-rich PDF understanding

Format: Mean accuracy

Difficulty: Complex document processing

olmOCR-Bench tests whether a system preserves reading order and document structure, not only character-level OCR. BenchLM tracks Interfaze's reported mean score as a display-only document AI benchmark.

BenchLM freshness & provenance

Version: olmOCR 2025

Refresh cadence: Quarterly

Staleness state: Current

Question availability: Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
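The tiering described above can be sketched in code. This is a minimal, hypothetical illustration of how freshness metadata might map to the three treatment tiers named on this page; the function name, thresholds, and field choices are assumptions for illustration, not BenchLM's actual policy.

```python
from datetime import date

def classify_benchmark(last_refresh: date, cadence_days: int,
                       questions_public: bool, today: date) -> str:
    """Hypothetical sketch: map freshness metadata to a treatment tier.

    Thresholds and rules are illustrative assumptions only.
    """
    age = (today - last_refresh).days
    if age > 2 * cadence_days:
        # Well past its refresh cadence: keep for reference only.
        return "display-only reference"
    if questions_public:
        # Public question sets carry contamination risk, so treat
        # the score with caution even when the snapshot is current.
        return "benchmark to watch"
    return "strong differentiator"

# olmOCR as shown on this page: quarterly cadence (~90 days),
# public benchmark set, snapshot current as of May 12, 2026.
tier = classify_benchmark(date(2026, 5, 12), 90, True, date(2026, 5, 12))
```

Under these assumed rules, a current snapshot with a public question set lands in the "benchmark to watch" tier; only fresh, held-out sets would count as strong differentiators.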

Benchmark score table (1 model)

1 · Interfaze Beta · 85.7%

FAQ

What does olmOCR measure?

An end-to-end document understanding benchmark over long, layout-rich PDFs with tables, equations, headers, footnotes, and multi-column flows.

Which model scores highest on olmOCR?

Interfaze Beta by Interfaze currently leads with a score of 85.7% on olmOCR.

How many models are evaluated on olmOCR?

1 AI model has been evaluated on olmOCR on BenchLM.

Last updated: May 12, 2026 · BenchLM version olmOCR 2025

AI models change fast. We track them for you.

For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.

Free. No spam. Unsubscribe anytime.