A display-only GPQA Diamond reference from provider comparison charts.
As of March 2026, Claude Opus 4.6 leads the GPQA-D leaderboard with 89.2%, followed by Kimi K2.5 (86.9%) and MiniMax M2.7 (86.2%).
Claude Opus 4.6 (Anthropic): 89.2%
Kimi K2.5 (Moonshot AI): 86.9%
MiniMax M2.7 (MiniMax): 86.2%
According to BenchLM.ai, Claude Opus 4.6 leads the GPQA-D benchmark with a score of 89.2%, followed by Kimi K2.5 (86.9%) and MiniMax M2.7 (86.2%). The top models are clustered within 3.0 points, suggesting this benchmark is nearing saturation for frontier models.
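For concreteness, the 3.0-point figure is simply the gap between the highest and lowest of the three published scores:

```typescript
// Not BenchLM's code: just checking the spread between the published
// top-three GPQA-D scores from the chart above.
const topScores = [89.2, 86.9, 86.2]; // Claude Opus 4.6, Kimi K2.5, MiniMax M2.7

const spread = Math.max(...topScores) - Math.min(...topScores);
console.log(`Top-3 spread: ${spread.toFixed(1)} points`); // "Top-3 spread: 3.0 points"
```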
Five models have been evaluated on GPQA-D. The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. GPQA-D itself, however, is displayed for reference only and excluded from the scoring formula, so it does not directly affect overall rankings.
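As a rough illustration of that exclusion, here is a minimal TypeScript sketch of a category-weighted overall score that skips display-only entries. Only the 12% Knowledge weight and the exclusion rule come from the page; the record shape, the other category names, and their weights are assumptions.

```typescript
// Hypothetical sketch of BenchLM-style weighted scoring; field names and
// the non-Knowledge weights are assumptions, not BenchLM's actual schema.
interface BenchmarkScore {
  benchmark: string;      // e.g. "GPQA-D"
  category: "Knowledge" | "Reasoning" | "Coding";
  score: number;          // percentage, 0-100
  displayOnly: boolean;   // true => shown for reference, excluded from scoring
}

const CATEGORY_WEIGHTS: Record<BenchmarkScore["category"], number> = {
  Knowledge: 0.12,        // the 12% Knowledge weight cited above
  Reasoning: 0.5,         // illustrative placeholders for the other categories
  Coding: 0.38,
};

// Average the scorable benchmarks per category, then apply category weights.
function overallScore(scores: BenchmarkScore[]): number {
  let total = 0;
  for (const category of Object.keys(CATEGORY_WEIGHTS) as Array<BenchmarkScore["category"]>) {
    const scorable = scores.filter(
      (s) => s.category === category && !s.displayOnly // GPQA-D is skipped here
    );
    if (scorable.length === 0) continue;
    const mean = scorable.reduce((acc, s) => acc + s.score, 0) / scorable.length;
    total += CATEGORY_WEIGHTS[category] * mean;
  }
  return total;
}
```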
Year: 2026
Tasks: Graduate-level science questions
Format: Multiple-choice questions
Difficulty: Graduate level
BenchLM stores GPQA-D separately from the standardized GPQA row when providers publish exact chart values that should not overwrite the core weighted benchmark.
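A minimal sketch of that separate-row convention, assuming a hypothetical row shape and key scheme (this is not BenchLM's actual schema):

```typescript
// Sketch of keeping a provider-published chart value alongside, but separate
// from, the standardized weighted row; all field names here are assumptions.
interface BenchmarkRow {
  key: string;                                       // "gpqa" vs "gpqa-d"
  score: number;                                     // percentage, 0-100
  source: "standardized-harness" | "provider-chart";
  weighted: boolean;                                 // only the standardized row feeds the formula
}

const rows = new Map<string, BenchmarkRow>();

// A provider-published chart value lands under its own key, so the core
// weighted GPQA row (if present) is never overwritten.
function recordChartValue(score: number): void {
  rows.set("gpqa-d", { key: "gpqa-d", score, source: "provider-chart", weighted: false });
}

recordChartValue(89.2); // Claude Opus 4.6's published GPQA-D value
```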
Version: GPQA-D 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
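As a hedged illustration, that tiering might look like the following; the three tier names come from the sentence above, while the metadata fields and the decision order are assumptions:

```typescript
// Hypothetical mapping from freshness metadata to the three treatment tiers
// named above; the field shapes and decision order are assumptions.
type Tier = "strong differentiator" | "benchmark to watch" | "display-only reference";

interface FreshnessMeta {
  stalenessState: "current" | "aging" | "stale";
  scoredInFormula: boolean; // false for GPQA-D
}

function treatmentTier(meta: FreshnessMeta): Tier {
  if (!meta.scoredInFormula) return "display-only reference"; // GPQA-D's case
  if (meta.stalenessState === "current") return "strong differentiator";
  return "benchmark to watch";
}

// GPQA-D: current, but excluded from the formula -> display-only reference.
console.log(treatmentTier({ stalenessState: "current", scoredInFormula: false }));
```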
Claude Opus 4.6 by Anthropic currently leads with a score of 89.2% on GPQA-D.
Five AI models have been evaluated on GPQA-D on BenchLM.