GDPval-AA is an evaluation of agentic real-world work tasks, reported as an Elo score in the DeepSeek-V4 thinking-mode evaluations.
BenchLM mirrors the published score view for GDPval-AA. MiMo-V2.5-Pro leads the public snapshot at 1581, followed by DeepSeek V4 Pro (Max) at 1554 and DeepSeek V4 Flash (Max) at 1395. BenchLM does not use these results to rank models overall.
Model | Developer | Elo
MiMo-V2.5-Pro | Xiaomi | 1581
DeepSeek V4 Pro (Max) | DeepSeek | 1554
DeepSeek V4 Flash (Max) | DeepSeek | 1395
The published GDPval-AA snapshot is tightly clustered at the top: MiMo-V2.5-Pro sits at 1581, and the third-place model trails by only 186 points. Across the published top 10 the spread widens to 798 points, so the benchmark still separates strong models even when the leaders cluster.
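To give these gaps some intuition, here is a minimal sketch that converts an Elo difference into an expected head-to-head preference rate, assuming the standard logistic Elo formula with a 400-point scale; the GDPval-AA source does not state which Elo variant it uses, so the numbers are illustrative only.

```python
# Expected head-to-head preference implied by an Elo gap, assuming the
# standard logistic Elo formula with a 400-point scale. GDPval-AA's exact
# Elo variant is not published, so treat these numbers as illustrative.

def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Probability that model A is preferred over model B under standard Elo."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

if __name__ == "__main__":
    leader, third = 1581, 1395  # scores from the published snapshot
    p = elo_win_probability(leader, third)
    print(f"186-point gap -> ~{p:.0%} expected preference for the leader")
```

Under that assumption, the 186-point gap between the leader and the third-place model corresponds to roughly a 74% expected preference for the leader in a head-to-head comparison.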
Four models have been evaluated on GDPval-AA. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system. GDPval-AA itself, however, is currently displayed for reference and excluded from the scoring formula, so it does not directly affect overall rankings.
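As an illustration of what "excluded from the scoring formula" means in practice, the sketch below computes a category-weighted overall score that skips display-only rows. The field names, the averaging scheme, and the second category weight are assumptions made for the example, not BenchLM's published formula; only the 22% Agentic weight comes from this page.

```python
# Hypothetical sketch of a category-weighted overall score that ignores
# display-only benchmarks such as GDPval-AA. Field names and the averaging
# scheme are assumptions, not BenchLM's published methodology.

from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    name: str
    category: str               # e.g. "Agentic"
    score: float                # normalized 0-100 score
    display_only: bool = False  # shown on the site but excluded from scoring

# Illustrative weights; only the 22% Agentic figure comes from the page above.
CATEGORY_WEIGHTS = {"Agentic": 0.22, "Other": 0.78}

def overall_score(results: list[BenchmarkResult]) -> float:
    """Weighted mean of per-category averages, ignoring display-only rows."""
    per_category: dict[str, list[float]] = {}
    for r in results:
        if r.display_only:
            continue  # GDPval-AA-style reference rows never enter the formula
        per_category.setdefault(r.category, []).append(r.score)
    weighted = sum(
        CATEGORY_WEIGHTS.get(cat, 0.0) * (sum(scores) / len(scores))
        for cat, scores in per_category.items()
    )
    used_weight = sum(CATEGORY_WEIGHTS.get(cat, 0.0) for cat in per_category)
    return weighted / used_weight if used_weight else 0.0
```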
Year: 2026
Tasks: Agentic real-world work tasks
Format: Elo
Difficulty: Professional agentic workflows
BenchLM stores GDPval-AA as a display-only provider-table row for DeepSeek-V4 because the source reports an Elo score rather than a 0-100 percentage.
Version: GDPval-AA 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
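As a rough illustration of that decision, the sketch below maps freshness metadata and score format to the three treatment tiers named above. The tier labels come from this page; the field names and decision rules are assumptions, not BenchLM's actual policy, which is documented on the methodology page.

```python
# Hypothetical mapping from freshness metadata and score format to a
# treatment tier. Tier names come from the page; the fields and the
# decision rules are assumptions, not BenchLM's actual policy.

from dataclasses import dataclass

@dataclass
class BenchmarkMeta:
    version: str
    refresh_cadence: str   # e.g. "Quarterly"
    staleness_state: str   # e.g. "Current" or "Stale"
    score_format: str      # e.g. "Elo" or "percentage"

def treatment(meta: BenchmarkMeta) -> str:
    if meta.score_format != "percentage":
        return "display-only reference"   # e.g. GDPval-AA's Elo scores
    if meta.staleness_state != "Current":
        return "benchmark to watch"
    return "strong differentiator"

gdpval_aa = BenchmarkMeta("GDPval-AA 2026", "Quarterly", "Current", "Elo")
print(treatment(gdpval_aa))  # -> display-only reference
```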
MiMo-V2.5-Pro by Xiaomi currently leads with a score of 1581 on GDPval-AA.
Four AI models have been evaluated on GDPval-AA in BenchLM.