A display-only GPQA Diamond reference from provider comparison charts.
BenchLM mirrors the published score view for GPQA-D. Gemini 3.1 Pro leads the public snapshot at 94.3%, followed by Claude Opus 4.7 (94.2%) and GPT-5.4 (92.8%). BenchLM does not use these results to rank models overall.
Gemini 3.1 Pro (Google): 94.3%
Claude Opus 4.7 (Anthropic): 94.2%
GPT-5.4 (OpenAI): 92.8%
The published GPQA-D snapshot is tightly clustered at the top: Gemini 3.1 Pro sits at 94.3%, and the third-place model trails by only 1.5 points. The broader top-10 spread is 8.3 points, so many of the published scores sit in a relatively narrow band.
11 models have been evaluated on GPQA-D. The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. GPQA-D itself is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
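As a rough illustration of what exclusion from the formula means, here is a minimal Python sketch of a category-weighted score that skips display-only rows. Apart from the 12% Knowledge weight quoted above, every name and value below is hypothetical; BenchLM's actual scoring code is not published on this page.

```python
# Minimal sketch: a weighted overall score that ignores display-only rows.
# All identifiers and example scores are hypothetical, except the 12%
# Knowledge weight quoted above.
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    category: str       # e.g. "Knowledge"
    score: float        # 0..100
    display_only: bool  # True: shown on the page, excluded from scoring

CATEGORY_WEIGHTS = {"Knowledge": 0.12}  # 12% weight, per the page

def overall_score(results: list[BenchmarkResult]) -> float:
    """Weight-normalized average over scoreable benchmarks only."""
    scoreable = [r for r in results if not r.display_only]
    total_w = sum(CATEGORY_WEIGHTS.get(r.category, 0.0) for r in scoreable)
    if total_w == 0.0:
        return 0.0
    return sum(r.score * CATEGORY_WEIGHTS.get(r.category, 0.0)
               for r in scoreable) / total_w

results = [
    BenchmarkResult("GPQA", "Knowledge", 90.0, display_only=False),   # hypothetical
    BenchmarkResult("GPQA-D", "Knowledge", 94.3, display_only=True),  # excluded
]
print(overall_score(results))  # 90.0 -- the display-only row has no effect
```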
Year: 2026
Tasks: Graduate-level science questions
Format: Multiple choice questions
Difficulty: Graduate level
BenchLM stores GPQA-D separately from the standardized GPQA row when providers publish exact chart values that should not overwrite the core weighted benchmark.
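A hedged sketch of that separation, assuming a hypothetical key scheme rather than BenchLM's real schema: the chart value lives under its own benchmark key, so writes to it can never touch the standardized row.

```python
# Hypothetical storage sketch: provider chart values are keyed separately
# from the standardized row, so neither write path can overwrite the other.
scores: dict[tuple[str, str], dict] = {}

def record_standardized(model: str, benchmark: str, score: float) -> None:
    scores[(model, benchmark)] = {"score": score, "source": "standardized"}

def record_chart_value(model: str, benchmark: str, score: float) -> None:
    # Display-only rows get a distinct key, e.g. "GPQA-D" next to "GPQA".
    scores[(model, f"{benchmark}-D")] = {"score": score, "source": "provider_chart"}

record_standardized("gemini-3.1-pro", "GPQA", 90.0)  # hypothetical value
record_chart_value("gemini-3.1-pro", "GPQA", 94.3)   # exact chart value
# Both rows coexist; the weighted formula reads only the standardized key.
```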
Version: GPQA-D 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
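A minimal sketch of such a freshness rule, assuming a made-up 180-day cutoff; the actual thresholds are defined in the methodology, not here.

```python
# Sketch of freshness-based tiering; the 180-day cutoff is an assumption,
# not BenchLM's published policy.
from datetime import date

def benchmark_tier(last_refresh: date, display_only: bool, today: date) -> str:
    if display_only:
        return "display-only reference"  # e.g. GPQA-D on this page
    age_days = (today - last_refresh).days
    if age_days <= 180:                  # hypothetical staleness cutoff
        return "strong differentiator"
    return "benchmark to watch"

print(benchmark_tier(date(2026, 1, 1), False, date(2026, 3, 1)))  # strong differentiator
print(benchmark_tier(date(2025, 1, 1), False, date(2026, 3, 1)))  # benchmark to watch
```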
Gemini 3.1 Pro by Google currently leads with a score of 94.3% on GPQA-D.
11 AI models have been evaluated on GPQA-D on BenchLM.