A long-context retrieval benchmark that measures whether a model can recover relevant information embedded deep inside very long contexts.
BenchLM mirrors the published score view for AI-Needle. Claude Opus 4.5 leads the public snapshot at 74%, followed by Qwen3.5 397B (68.7%) and Qwen3.6 Plus (68.3%). BenchLM does not use these results to rank models overall.
Claude Opus 4.5 (Anthropic): 74%
Qwen3.5 397B (Alibaba): 68.7%
Qwen3.6 Plus (Alibaba): 68.3%
The published AI-Needle snapshot is tightly clustered at the top: Claude Opus 4.5 sits at 74%, and the third-place model is only 5.7 points behind. The broader top-10 spread is 10.7 points, so the benchmark still separates strong models even when the leaders cluster.
Four models have been evaluated on AI-Needle. The benchmark falls in the Reasoning category, which carries a 17% weight in BenchLM.ai's overall scoring system. AI-Needle is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
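The "display-only" exclusion described above can be sketched as a simple weighted aggregation. This is an illustrative assumption about how such a policy could work, not BenchLM's actual implementation; the benchmark names, scores, and the `display_only` flag are hypothetical.

```python
def overall_score(results: dict[str, dict], weights: dict[str, float]) -> float:
    """Aggregate benchmark scores into an overall score.

    `results` maps benchmark name -> {"category", "score", "display_only"}.
    Display-only benchmarks are dropped before weighting, mirroring the
    policy that AI-Needle is shown for reference but not scored.
    """
    per_category: dict[str, list[float]] = {}
    for bench in results.values():
        if bench.get("display_only"):
            continue  # e.g. AI-Needle: visible on the page, excluded here
        per_category.setdefault(bench["category"], []).append(bench["score"])
    total = 0.0
    for category, scores in per_category.items():
        # Each category contributes its mean score times its weight.
        total += weights.get(category, 0.0) * (sum(scores) / len(scores))
    return total
```

Under this sketch, toggling `display_only` on a benchmark changes the overall score without touching the published per-benchmark view.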
Year: 2026
Tasks: Long-context retrieval
Format: Needle-in-a-haystack recall
Difficulty: Long-context memory
AI-Needle is useful for testing whether very large context windows are actually usable rather than just headline numbers. It rewards precise recall under distractors and long-document clutter.
Version: AI-Needle 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
Claude Opus 4.5 by Anthropic currently leads with a score of 74% on AI-Needle.
Four AI models have been evaluated on AI-Needle on BenchLM.