MRCR v2 slice focused on very long contexts at 128K-256K lengths.
BenchLM mirrors the published score view for MRCR v2 128K-256K. GPT-5.5 leads the public snapshot at 87.5%, followed by Claude Opus 4.7 (Adaptive) at 59.2%. BenchLM does not use these results to rank models overall.
GPT-5.5
OpenAI
Claude Opus 4.7 (Adaptive)
Anthropic
Year
2026
Tasks
8-needle retrieval tasks
Format
Very-long-context retrieval
Difficulty
Very-long-context reasoning
A harder MRCR setting that stresses memory discipline and retrieval deeper into long contexts.
Version
MRCR v2 128K-256K 2026
Refresh cadence
Quarterly
Staleness state
Current
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
GPT-5.5 by OpenAI currently leads with a score of 87.5% on MRCR v2 128K-256K.
2 AI models have been evaluated on MRCR v2 128K-256K on BenchLM.