A display-only Artificial Analysis factuality metric for the rate of incorrect answers among non-correct responses.
BenchLM mirrors the published score view for AA-Omniscience Hallucination Rate. Grok 4.3 leads the public snapshot at 75.0%. BenchLM does not use these results to rank models overall.
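The metric above can be sketched as a simple ratio: among the responses that were not correct, how many were wrong answers rather than refusals. A minimal illustration, assuming the inputs are raw counts of incorrect answers and declined questions (function name and signature are illustrative, not the Artificial Analysis implementation):

```python
def hallucination_rate(incorrect: int, declined: int) -> float:
    """Share of incorrect answers among non-correct responses.

    Non-correct responses are wrong answers plus declined questions; a
    lower rate means the model more often abstains instead of inventing
    an answer. Illustrative only, not the official AA implementation.
    """
    non_correct = incorrect + declined
    if non_correct == 0:
        return 0.0  # no non-correct responses, so no hallucinations
    return incorrect / non_correct

# e.g. 75 wrong answers and 25 refusals among 100 non-correct responses
print(f"{hallucination_rate(75, 25):.1%}")  # 75.0%
```

Note that the denominator excludes correct answers entirely, which is why a model can score well on accuracy yet still show a high hallucination rate if it rarely declines.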
Year: 2026
Tasks: Knowledge questions
Format: Hallucination rate
Difficulty: Factuality
BenchLM marks this row as lower-is-better because a lower hallucination rate is preferable, even though the OpenRouter card displays the raw percentage.
Version: AA-Omniscience Hallucination Rate 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
Grok 4.3 by xAI currently leads with a score of 75.0% on AA-Omniscience Hallucination Rate.
1 AI model has been evaluated on AA-Omniscience Hallucination Rate on BenchLM.