
SuperGPQA: Scaling LLM Evaluation Across 285 Graduate Disciplines (SuperGPQA)

An expanded version of GPQA that evaluates graduate-level knowledge and reasoning capabilities across 285 disciplines, providing comprehensive coverage of academic domains.

Top models on SuperGPQA — April 20, 2026

As of April 20, 2026, Claude Opus 4.6 leads the SuperGPQA leaderboard with 95%, followed by Claude Sonnet 4.6 (95%) and Qwen 3.6 Max (preview) (73.9%).

13 models · Knowledge category · 12% of category score · Status: Current · Updated April 20, 2026

According to BenchLM.ai, Claude Opus 4.6 leads the SuperGPQA benchmark with a score of 95%, followed by Claude Sonnet 4.6 (95%) and Qwen 3.6 Max (preview) (73.9%). There is significant spread across the leaderboard, making this benchmark effective at differentiating model capabilities.

13 models have been evaluated on SuperGPQA. The benchmark falls in BenchLM.ai's Knowledge category, which carries a 12% weight in the overall scoring system. Within that category, SuperGPQA contributes 12% of the category score, so strong performance here directly affects a model's overall ranking.
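Under a simple weighted-average reading of those figures, the weighting can be sketched as follows. This is illustrative only: the 12% numbers come from the page above, but the exact aggregation method BenchLM.ai uses is not documented here, and the function name is hypothetical.

```python
# Hypothetical sketch of how a SuperGPQA score could roll up into an
# overall ranking, assuming a simple weighted-average scheme.
SUPERGPQA_SHARE_OF_CATEGORY = 0.12  # SuperGPQA's share of the Knowledge score
KNOWLEDGE_WEIGHT = 0.12             # Knowledge category's share of the overall score

def supergpqa_contribution(score_pct: float) -> float:
    """Points a SuperGPQA score contributes to an overall 0-100 score."""
    return score_pct * SUPERGPQA_SHARE_OF_CATEGORY * KNOWLEDGE_WEIGHT

# Under these assumptions, a model scoring 95% on SuperGPQA earns
# 95 * 0.12 * 0.12 = 1.368 points toward its overall score.
```

The small multiplier explains why the benchmark "directly affects" rankings without dominating them: even a 30-point gap on SuperGPQA moves the overall score by well under half a point in this scheme.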

About SuperGPQA

Year: 2025

Tasks: 285 disciplines

Format: Multiple-choice questions

Difficulty: Graduate level

SuperGPQA significantly expands the scope of graduate-level evaluation, covering 285 disciplines compared to GPQA's focus on three subjects (biology, physics, and chemistry). It maintains the same rigorous standards while providing far broader coverage of academic knowledge.

BenchLM freshness & provenance

Version: SuperGPQA 2025

Refresh cadence: Quarterly

Staleness state: Current

Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.

Leaderboard (13 models)

1. 95% (Claude Opus 4.6)
2. 95% (Claude Sonnet 4.6)
3. 73.9% (Qwen 3.6 Max (preview))
4. 71.6%
5. 70.6%
6. 70.4%
7. 69.2%
8. 67.1%
9. 66.8%
10. 65.6%
11. 64.7%
12. 63.4%
13. 62.6%

FAQ

What does SuperGPQA measure?

SuperGPQA measures graduate-level knowledge and reasoning across 285 disciplines. It is an expanded version of GPQA, providing comprehensive coverage of academic domains.

Which model scores highest on SuperGPQA?

Claude Opus 4.6 by Anthropic currently leads with a score of 95% on SuperGPQA.

How many models are evaluated on SuperGPQA?

13 AI models have been evaluated on SuperGPQA on BenchLM.

Last updated: April 20, 2026 · BenchLM version SuperGPQA 2025
