A long-context benchmark that measures whether models can actually use extended context windows for reasoning and retrieval.
As of April 29, 2026, Claude Opus 4.5 leads the LongBench v2 leaderboard with 64.4%, followed by Qwen3.5 397B (63.2%) and Qwen3.6 Plus (62.0%).
1. Claude Opus 4.5 (Anthropic): 64.4%
2. Qwen3.5 397B (Alibaba): 63.2%
3. Qwen3.6 Plus (Alibaba): 62.0%
According to BenchLM.ai, the top three models are clustered within 2.4 points, suggesting this benchmark is nearing saturation for frontier models.
10 models have been evaluated on LongBench v2. The benchmark falls in the Reasoning category, which carries a 17% weight in BenchLM.ai's overall scoring system. Within that category, LongBench v2 contributes 30% of the category score, so strong performance here directly affects a model's overall ranking.
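As a rough illustration, those two weights imply that LongBench v2 accounts for roughly 5% of a model's overall score, assuming category and benchmark weights combine multiplicatively (an assumption; BenchLM's exact aggregation is not specified here). A minimal sketch:

```python
# Back-of-the-envelope share of the overall BenchLM score attributable
# to LongBench v2, assuming weights combine multiplicatively.
CATEGORY_WEIGHT = 0.17   # Reasoning category's share of the overall score
BENCHMARK_WEIGHT = 0.30  # LongBench v2's share of the Reasoning category

effective_weight = CATEGORY_WEIGHT * BENCHMARK_WEIGHT
print(f"Effective overall weight: {effective_weight:.1%}")  # -> 5.1%
```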
Year: 2025
Tasks: Long-context tasks
Format: Extended-context retrieval and reasoning
Difficulty: Hard long-context
LongBench v2 is useful because context-window size alone is not a capability. It measures whether a model can retain, retrieve, and reason over long inputs effectively.
Version: LongBench v2 2025
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
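As a purely hypothetical sketch of such a policy (the tier names come from the paragraph above, but the cadence-based rule and thresholds are assumptions, not BenchLM's published methodology), freshness metadata could map to a tier like this:

```python
# Hypothetical freshness-to-tier mapping. The thresholds below are
# illustrative assumptions; BenchLM's actual policy is documented on
# its methodology page.
from datetime import date, timedelta

def benchmark_tier(last_refresh: date, cadence_days: int = 90) -> str:
    """Classify a benchmark by how overdue its refresh is."""
    overdue = (date.today() - last_refresh).days - cadence_days
    if overdue <= 0:
        return "strong differentiator"   # refreshed within its cadence
    if overdue <= cadence_days:
        return "benchmark to watch"      # up to one cycle overdue
    return "display-only reference"      # more than one cycle overdue

# A quarterly benchmark refreshed 30 days ago is still current.
print(benchmark_tier(date.today() - timedelta(days=30)))  # strong differentiator
```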