
Artificial Analysis Omniscience Accuracy (AA-Omniscience Accuracy)

A display-only Artificial Analysis knowledge metric for the proportion of correctly answered questions.
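The metric itself is simple: the share of questions a model answers correctly. A minimal sketch of that calculation (the function name and the pass/fail input format are illustrative, not BenchLM's implementation):

```python
def accuracy(results):
    """Accuracy: proportion of correctly answered questions.

    `results` is a sequence of booleans, one per question,
    True where the model's answer was marked correct.
    """
    correct = sum(1 for r in results if r)
    return correct / len(results)

# 346 correct answers out of 1000 questions -> 0.346, i.e. 34.6%
score = accuracy([True] * 346 + [False] * 654)
```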

Benchmark score on AA-Omniscience Accuracy — May 1, 2026

BenchLM mirrors the published score view for AA-Omniscience Accuracy. Grok 4.3 leads the public snapshot at 34.6%. BenchLM does not use these results to rank models overall.

1 model · Knowledge · Current · Display only · Updated May 1, 2026

About AA-Omniscience Accuracy

Year: 2026

Tasks: Knowledge questions

Format: Accuracy

Difficulty: Broad knowledge

BenchLM stores AA-Omniscience Accuracy as a display-only row when a model page publishes the exact Artificial Analysis benchmark card value.

BenchLM freshness & provenance

Version: AA-Omniscience Accuracy 2026

Refresh cadence: Quarterly

Staleness state: Current

Question availability: Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
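The tier names above come straight from this page, but the exact decision rule is defined on the methodology page; the mapping below is only a hypothetical sketch of how freshness metadata could feed that classification:

```python
def scoring_tier(staleness_state: str, display_only: bool) -> str:
    """Map freshness metadata to a BenchLM scoring tier.

    Hypothetical rule: a display-only flag always wins, a "Current"
    benchmark counts as a strong differentiator, and anything stale
    drops to "benchmark to watch".
    """
    if display_only:
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# AA-Omniscience Accuracy is Current but stored display-only
tier = scoring_tier("Current", display_only=True)
```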

Benchmark score table (1 model)

Rank: 1 · Model: Grok 4.3 (xAI) · Score: 34.6%

FAQ

What does AA-Omniscience Accuracy measure?

A display-only Artificial Analysis knowledge metric for the proportion of correctly answered questions.

Which model scores highest on AA-Omniscience Accuracy?

Grok 4.3 by xAI currently leads with a score of 34.6% on AA-Omniscience Accuracy.

How many models are evaluated on AA-Omniscience Accuracy?

1 AI model has been evaluated on AA-Omniscience Accuracy on BenchLM.

Last updated: May 1, 2026 · BenchLM version AA-Omniscience Accuracy 2026
