OCRBench V2

A native OCR benchmark for reading text from images across multilingual scripts, low-quality scans, handwriting, structured layouts, charts, and screenshots.

Benchmark score on OCRBench V2 — May 12, 2026

BenchLM mirrors the published score view for OCRBench V2. Interfaze Beta leads the public snapshot at 70.7%. BenchLM does not use these results to rank models overall.

1 model · Multimodal & Grounded · Current · Display only · Updated May 12, 2026

About OCRBench V2

Year: 2025

Tasks: Image OCR tasks

Format: Accuracy

Difficulty: Native visual text understanding

OCRBench V2 evaluates whether multimodal models can extract visual text directly from images before downstream reasoning or structure extraction. BenchLM stores Interfaze's reported score as a display-only OCR row.
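The page lists the benchmark's format as "Accuracy" without reproducing the exact scoring rule. As an illustration only, accuracy-style OCR metrics are commonly computed as exact match or normalized edit distance between the predicted and reference transcriptions. A minimal sketch of the normalized-edit-distance variant follows; `levenshtein` and `ocr_accuracy` are illustrative names, not BenchLM or OCRBench V2 APIs:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def ocr_accuracy(prediction: str, reference: str) -> float:
    # Score in [0, 1]: 1.0 for an exact match, degrading with edit distance.
    if not reference:
        return 1.0 if not prediction else 0.0
    dist = levenshtein(prediction, reference)
    return max(0.0, 1.0 - dist / len(reference))
```

Averaging such a per-sample score over a test set yields a single percentage like the 70.7% shown on this page, under the stated assumption about the metric.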

BenchLM freshness & provenance

Version: OCRBench V2 2025

Refresh cadence: Quarterly

Staleness state: Current

Question availability: Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
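The full policy lives on the methodology page; as an illustration only, a cadence-based staleness rule could be sketched as below. The thresholds, the "Watch" state, and the function name are assumptions, not BenchLM's published rules:

```python
from datetime import date

def staleness_state(last_refresh: date, today: date, cadence_days: int = 90) -> str:
    """Hypothetical freshness rule: 'Current' while within the refresh
    cadence, 'Watch' for one missed cycle, 'Display only' after that."""
    age = (today - last_refresh).days
    if age <= cadence_days:
        return "Current"
    if age <= 2 * cadence_days:
        return "Watch"
    return "Display only"
```

Under this sketch, a quarterly-refreshed benchmark last updated May 12, 2026 would read "Current" through mid-August 2026.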

Benchmark score table (1 model)

1. Interfaze Beta: 70.7%

FAQ

What does OCRBench V2 measure?

A native OCR benchmark for reading text from images across multilingual scripts, low-quality scans, handwriting, structured layouts, charts, and screenshots.

Which model scores highest on OCRBench V2?

Interfaze Beta by Interfaze currently leads with a score of 70.7% on OCRBench V2.

How many models are evaluated on OCRBench V2?

1 AI model has been evaluated on OCRBench V2 on BenchLM.

Last updated: May 12, 2026 · BenchLM version OCRBench V2 2025
