OpenAI MRCR v2 8-needle 128K-256K (MRCR v2 128K-256K)

An MRCR v2 slice focused on 8-needle retrieval over very long contexts at 128K-256K token lengths.

How BenchLM shows MRCR v2 128K-256K right now

BenchLM is tracking MRCR v2 128K-256K in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.

4 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only

Tracked score on MRCR v2 128K-256K — April 16, 2026

BenchLM mirrors the published tracked score view for MRCR v2 128K-256K. GPT-5.4 leads the public snapshot at 79.3%, followed by GPT-5.4 mini (33.6%) and GPT-5.4 nano (33.1%). BenchLM does not use these results to rank models overall.

4 models · Reasoning · Current · Display only · Updated April 16, 2026

The published MRCR v2 128K-256K snapshot has a clear leader rather than a tight cluster: GPT-5.4 sits at 79.3%, while the third-ranked model is 46.2 points behind. The spread across all four tracked models is 59.9 points, so the benchmark still separates strong models clearly.
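For readers who want to check the arithmetic, here is a minimal sketch that reproduces the gap and spread figures from the tracked scores in the table further down. The score values are the ones shown on this page; the variable names are purely illustrative.

    # Tracked MRCR v2 128K-256K scores from the table below (percent).
    tracked_scores = {
        "GPT-5.4": 79.3,
        "GPT-5.4 mini": 33.6,
        "GPT-5.4 nano": 33.1,
        "GPT-5 mini": 19.4,
    }

    ranked = sorted(tracked_scores.values(), reverse=True)
    leader = ranked[0]                 # 79.3
    gap_to_third = leader - ranked[2]  # 79.3 - 33.1 = 46.2 points
    full_spread = leader - ranked[-1]  # 79.3 - 19.4 = 59.9 points

    print(f"gap to third: {gap_to_third:.1f}, full spread: {full_spread:.1f}")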

Four models have been evaluated on MRCR v2 128K-256K. The benchmark falls in the Reasoning category, which carries a 17% weight in BenchLM.ai's overall scoring system. MRCR v2 128K-256K is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
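BenchLM's full scoring formula is not reproduced on this page, so the sketch below is only an illustration of how a fixed category weight and a display-only exclusion could interact. The 17% Reasoning weight is taken from this page; the other category names, weights, and scores are made-up placeholders.

    # Illustrative only, not BenchLM's actual formula. Only the 17% Reasoning
    # weight comes from this page; every other name and number is a placeholder.
    category_weights = {"Reasoning": 0.17, "Coding": 0.25, "Knowledge": 0.58}
    category_scores = {"Reasoning": 62.0, "Coding": 71.5, "Knowledge": 80.0}

    # Display-only benchmarks such as MRCR v2 128K-256K are excluded before a
    # category score is computed, so they never reach this aggregation step.
    overall = sum(category_weights[c] * category_scores[c] for c in category_weights)
    print(f"weighted overall score: {overall:.1f}")  # ~74.8 with these placeholder numbers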

About MRCR v2 128K-256K

Year: 2026
Tasks: 8-needle retrieval tasks
Format: Very-long-context retrieval
Difficulty: Very long-context reasoning

A harder MRCR setting that stresses memory discipline and retrieval deeper into long contexts.
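The exact MRCR v2 task generator is not described on this page, so the toy sketch below should be read as an assumed illustration of the general shape of an 8-needle probe, not OpenAI's actual construction: it plants eight key-value needles at increasing depths in filler text and checks an exact-match answer for one of them.

    import random

    # Toy stand-in for an 8-needle long-context probe. This is NOT OpenAI's
    # MRCR v2 generator; it only illustrates the general shape of the task.
    NUM_NEEDLES = 8
    FILLER_WORDS = 200_000  # roughly the 128K-256K token regime, as word filler

    def build_prompt(rng: random.Random) -> tuple[str, dict[str, str]]:
        """Scatter key-value needles at increasing depths inside filler text."""
        needles = {f"code-{i}": str(rng.randint(100000, 999999)) for i in range(NUM_NEEDLES)}
        filler = ["lorem"] * FILLER_WORDS
        for i, (key, value) in enumerate(needles.items()):
            depth = int(len(filler) * (i + 1) / (NUM_NEEDLES + 1))
            filler.insert(depth, f"[NEEDLE {key} = {value}]")
        return " ".join(filler), needles

    def score(answer: str, needles: dict[str, str], asked_key: str) -> bool:
        """Exact-match scoring for the single needle the question asked about."""
        return answer.strip() == needles[asked_key]

    rng = random.Random(0)
    prompt, needles = build_prompt(rng)
    print(len(prompt.split()), "filler words; code-3 ->", needles["code-3"])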

BenchLM freshness & provenance

Version: MRCR v2 128K-256K 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.

Tracked score table (4 models)

1. GPT-5.4 (gpt-5-4): 79.3%
2. GPT-5.4 mini (gpt-5-4-mini): 33.6%
3. GPT-5.4 nano (gpt-5-4-nano): 33.1%
4. GPT-5 mini (gpt-5-mini): 19.4%

FAQ

What does MRCR v2 128K-256K measure?

MRCR v2 128K-256K measures 8-needle retrieval over very long contexts in the 128K-256K token range.

Which model leads the published MRCR v2 128K-256K snapshot?

GPT-5.4 currently leads the published MRCR v2 128K-256K snapshot with a tracked score of 79.3%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on MRCR v2 128K-256K?

4 AI models are included in BenchLM's mirrored MRCR v2 128K-256K snapshot, based on the public leaderboard captured on April 16, 2026.

Last updated: April 16, 2026 · mirrored from the public benchmark leaderboard
