This reporting page isolates long-context performance from the broader reasoning category. It uses a sourced subset of long-context and memory evaluations such as LongBench v2, MRCRv2, AI-Needle, Graphwalks, and document-length multimodal reasoning. Use it when context retention, memory, and long-document handling matter more than abstract reasoning alone.
This page ranks models using only sourced long-context benchmarks in the reporting family rather than the full provisional overall leaderboard.
Bottom line: Most models claim 128K+ context, but actual long-context performance varies wildly. These benchmarks test what models can really do with their context windows.
According to BenchLM.ai, Claude Opus 4.5 leads this ranking with a score of 68.2, followed by Qwen3.5 397B (65.4) and Qwen3.6 Plus (64.5). There is meaningful separation between the top models, suggesting genuine performance differences.
The best open-weight option is Qwen3.5 397B (ranked #2 with a score of 65.4). Open-weight models are highly competitive in this category — self-hosting is a viable alternative to proprietary APIs.
This ranking averages sourced long-context benchmark scores tracked by BenchLM.ai, rather than using the full provisional overall weighted score. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
1. Claude Opus 4.5 · Anthropic · 200K context · 68.2
2. Qwen3.5 397B · Alibaba · 128K context · 65.4
3. Qwen3.6 Plus · Alibaba · 1M context · 64.5
The top model on this sourced reporting-family slice is Claude Opus 4.5 by Anthropic with an average of 68.2.
The best open-weight model is Qwen3.5 397B at position #2.
Four models are listed with sourced benchmark coverage in this reporting family.
This is a reporting family ranking, not a weighted category. It averages sourced long-context benchmarks to give a focused view of context-window performance.
Models must have sourced results on at least a quarter of the benchmarks in this family to be included. Coverage varies — a model with 2 benchmark scores is less reliable than one with 5.
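To make the inclusion rule concrete, here is a minimal Python sketch of how a family average with a one-quarter coverage bar could be computed. The benchmark list mirrors the evaluations named above; the model names, scores, and the `family_average` helper are hypothetical illustrations, not BenchLM.ai's actual pipeline.

```python
# Sketch of a reporting-family average with a coverage threshold.
# All data below is illustrative, not sourced BenchLM.ai results.

FAMILY_BENCHMARKS = [
    "LongBench v2", "MRCRv2", "AI-Needle", "Graphwalks",
    "document-length multimodal reasoning",
]

MIN_COVERAGE = 0.25  # sourced results on at least a quarter of the family

# Hypothetical sourced scores per model; missing keys = no sourced result.
scores = {
    "Model A": {"LongBench v2": 62.0, "MRCRv2": 71.5, "Graphwalks": 58.3},
    "Model B": {"LongBench v2": 55.1},
}

def family_average(model_scores: dict[str, float]) -> float | None:
    """Average the sourced scores; exclude models below the coverage bar."""
    coverage = len(model_scores) / len(FAMILY_BENCHMARKS)
    if coverage < MIN_COVERAGE:
        return None  # too few sourced results to rank reliably
    return sum(model_scores.values()) / len(model_scores)

for model, sourced in scores.items():
    avg = family_average(sourced)
    print(model, f"{avg:.1f}" if avg is not None else "excluded (low coverage)")
```

Under this sketch, "Model A" (3 of 5 benchmarks sourced) is ranked on the average of its three scores, while "Model B" (1 of 5) falls below the bar and is excluded, which is why coverage matters when comparing averages in this family.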