MRCR v2 slice focused on long-context retrieval at 64K-128K lengths.
BenchLM tracks MRCR v2 64K-128K in its local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.
These tracked rows are useful for inspection and spot-checking, but until the exact-source attachments are complete they should not be treated as fully verified public benchmark rows.
BenchLM mirrors the published tracked-score view for MRCR v2 64K-128K. GPT-5.4 leads the public snapshot at 86%, followed by GPT-5.4 mini (47.7%) and GPT-5.4 nano (44.2%). BenchLM does not use these results to rank models overall.
GPT-5.4 (OpenAI, model ID gpt-5-4)
GPT-5.4 mini (OpenAI, model ID gpt-5-4-mini)
GPT-5.4 nano (OpenAI, model ID gpt-5-4-nano)
The published MRCR v2 64K-128K snapshot has a clear leader rather than a tight cluster: GPT-5.4 sits at 86%, while the third row trails by 41.8 points. The full spread across tracked rows is 50.9 points, so the benchmark still separates strong models from weaker ones.
Four models have been evaluated on MRCR v2 64K-128K. The benchmark falls in the Reasoning category, which carries a 17% weight in BenchLM.ai's overall scoring system. MRCR v2 64K-128K is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
Year
2026
Tasks
8-needle retrieval tasks
Format
Long-context retrieval
Difficulty
Long-context reasoning
Measures whether models can recover the right details when multiple relevant items are buried in long contexts.
Version
MRCR v2 64K-128K 2026
Refresh cadence
Quarterly
Staleness state
Current
Question availability
Public benchmark set
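To make the "8-needle retrieval" description above concrete, here is a minimal illustrative grader, not BenchLM's or the benchmark's actual scoring code: each item compares the model's response against the target "needle" text with a string-similarity ratio, and the benchmark score is the mean grade across items, reported as a percentage. All function names and the example rows are assumptions for illustration.

```python
from difflib import SequenceMatcher

def grade_retrieval(response: str, target: str) -> float:
    """Illustrative per-item grade: string similarity between the
    model's answer and the target needle, in [0, 1]."""
    return SequenceMatcher(None, response, target).ratio()

def benchmark_score(rows: list[tuple[str, str]]) -> float:
    """Mean per-item grade across all (response, target) pairs,
    reported as a percentage."""
    grades = [grade_retrieval(resp, tgt) for resp, tgt in rows]
    return 100.0 * sum(grades) / len(grades)

# Hypothetical items: one exact recovery, one partial recovery.
rows = [
    ("the code word is maple", "the code word is maple"),
    ("the code word is birch", "the code word is maple"),
]
print(f"{benchmark_score(rows):.1f}%")
```

A real long-context harness would additionally embed each needle at a controlled depth inside a 64K-128K-token distractor document before querying the model; the grading step sketched here is the same either way.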
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
GPT-5.4 currently leads the published MRCR v2 64K-128K snapshot with a tracked score of 86%. BenchLM shows this benchmark for display only and does not use it in overall rankings.
Four AI models are included in BenchLM's mirrored MRCR v2 64K-128K snapshot, based on the public leaderboard captured on April 16, 2026.