This reporting page isolates long-context performance from the broader reasoning category. It uses a sourced subset of long-context and memory evaluations such as LongBench v2, MRCRv2, AI-Needle, Graphwalks, and document-length multimodal reasoning. Use it when context retention, memory, and long-document handling matter more than abstract reasoning alone.
This page ranks models using only sourced long-context benchmarks in the reporting family rather than the full provisional overall leaderboard.
According to BenchLM.ai, Claude Opus 4.5 leads this ranking with a score of 68.2, ahead of Qwen3.5 397B (65.4) and Qwen3.6 Plus (64.5). The 2.8-point gap between first and second place suggests a genuine performance difference at the top, while the second and third models are nearly tied.
The best open-weight option is Qwen3.5 397B, ranked #2 with a score of 65.4. Open-weight models are highly competitive in this category, making self-hosting a viable alternative to proprietary APIs.
This ranking weights only sourced long-context benchmark scores using BenchLM.ai's scoring formula; it is not based on the full provisional overall leaderboard. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
1. Claude Opus 4.5 — Anthropic · 200K context — 68.2
2. Qwen3.5 397B — Alibaba · 128K context — 65.4
3. Qwen3.6 Plus — Alibaba · 1M context — 64.5
The top model on this sourced reporting-family slice is Claude Opus 4.5 by Anthropic with an average of 68.2.
The best open-weight model is Qwen3.5 397B at position #2.
Eight models are listed with sourced benchmark coverage in this reporting family.