An evaluation focused on professional domain expertise and task delivery quality in office-style knowledge work.
BenchLM mirrors the published score view for GDPval-AA. GPT-5.4 leads the public snapshot at 1672, followed by Claude Opus 4.6 (1606) and MiniMax M2.7 (1495). BenchLM does not use these results to rank models overall.
GPT-5.4 (OpenAI): 1672
Claude Opus 4.6 (Anthropic): 1606
MiniMax M2.7 (MiniMax): 1495
The published GDPval-AA snapshot is tightly clustered at the top: GPT-5.4 sits at 1672, and the third-place model trails by only 177 points. The spread across all six evaluated models is 617 points, so the benchmark still separates strong models even when the leaders cluster.
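As a quick check on those figures, here is a minimal Python sketch that derives the gaps from the published snapshot. The scores are the ones listed above; everything else is illustrative.

```python
# Published GDPval-AA scores from the snapshot above (top three rows).
scores = {
    "GPT-5.4": 1672,
    "Claude Opus 4.6": 1606,
    "MiniMax M2.7": 1495,
}

# Sort descending and compare the leader against the third row.
ranked = sorted(scores.values(), reverse=True)
print("Leader-to-third gap:", ranked[0] - ranked[-1])  # 1672 - 1495 = 177
```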
Six models have been evaluated on GDPval-AA. The benchmark falls in the Multimodal & Grounded category, which carries a 12% weight in BenchLM.ai's overall scoring system. GDPval-AA is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
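A minimal sketch of how a display-only benchmark drops out of a category-weighted overall score. The 12% weight and the display-only flag come from the text above; the record layout, the second benchmark entry, and the aggregation itself are assumptions, not BenchLM's actual formula.

```python
# Illustrative benchmark records. Only the 12% category weight and the
# display-only flag for GDPval-AA are taken from this page; the second
# entry is hypothetical.
benchmarks = [
    {"name": "GDPval-AA", "category": "Multimodal & Grounded",
     "score": 1672, "display_only": True},
    {"name": "HypotheticalBench", "category": "Multimodal & Grounded",
     "score": 1500, "display_only": False},
]

category_weights = {"Multimodal & Grounded": 0.12}

def weighted_contribution(records, weights):
    """Sum category-weighted scores, skipping display-only references."""
    total = 0.0
    for rec in records:
        if rec["display_only"]:
            continue  # display-only: shown on the page, excluded from scoring
        total += weights[rec["category"]] * rec["score"]
    return total

# GDPval-AA contributes nothing; only the hypothetical entry counts.
print(weighted_contribution(benchmarks, category_weights))  # 180.0
```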
Year: 2026
Tasks: Professional office delivery
Format: ELO-style office benchmark
Difficulty: Professional knowledge work
MiniMax describes GDPval-AA as an office-domain evaluation of professional expertise and delivery quality. BenchLM stores the published ELO-style score as a display-only benchmark reference.
Version: GDPval-AA 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
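As a rough illustration of that policy, here is a sketch that maps refresh metadata to one of the three treatment tiers named above. The tier labels and the quarterly cadence come from this page; the age thresholds are assumptions, not BenchLM's published rules.

```python
from datetime import date, timedelta

def freshness_tier(last_refresh: date, cadence_days: int = 90) -> str:
    """Classify a benchmark by the age of its data.

    cadence_days=90 reflects the quarterly refresh cadence listed above;
    the 1x / 2x cutoffs are illustrative assumptions.
    """
    age = (date.today() - last_refresh).days
    if age <= cadence_days:
        return "strong differentiator"
    if age <= 2 * cadence_days:
        return "benchmark to watch"
    return "display-only reference"

print(freshness_tier(date.today() - timedelta(days=30)))   # strong differentiator
print(freshness_tier(date.today() - timedelta(days=120)))  # benchmark to watch
print(freshness_tier(date.today() - timedelta(days=400)))  # display-only reference
```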
GPT-5.4 by OpenAI currently leads with a score of 1672 on GDPval-AA.
Six AI models have been evaluated on GDPval-AA on BenchLM.