A commonsense natural-language inference benchmark reported in DeepSeek-V4 base-model evaluations.
BenchLM mirrors the published score view for HellaSwag. DeepSeek V4 Pro Base leads the public snapshot at 88.0%, followed by DeepSeek V4 Flash Base (85.7%). BenchLM does not use these results to rank models overall.
DeepSeek V4 Pro Base (DeepSeek): 88.0%
DeepSeek V4 Flash Base (DeepSeek): 85.7%
Year: 2026
Tasks: Commonsense completion questions
Format: Exact match (scoring sketched below)
Difficulty: Commonsense reasoning
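Exact match here means a model's chosen ending must equal the gold label exactly, with no partial credit. A minimal illustration of that scoring rule (a sketch, not BenchLM's actual evaluation harness):

```python
def exact_match_accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of items whose predicted ending exactly matches the
    gold ending label. All-or-nothing per item: no partial credit."""
    if len(predictions) != len(gold):
        raise ValueError("prediction/gold length mismatch")
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Example: two of three choices match the gold labels.
# exact_match_accuracy(["B", "A", "D"], ["B", "C", "D"]) -> 0.666...
```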
BenchLM stores HellaSwag as a display-only provider-table row when exact values are published in DeepSeek-V4 evaluations.
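A minimal sketch of what such a display-only row could look like; the schema and field names below are assumptions for illustration, not BenchLM's actual storage format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderTableRow:
    # Hypothetical schema; BenchLM's real storage is not public.
    benchmark: str        # e.g. "HellaSwag"
    model: str            # e.g. "DeepSeek V4 Pro Base"
    provider: str         # e.g. "DeepSeek"
    score_pct: float      # exact published value, mirrored as-is
    source: str           # where the value was published
    display_only: bool = True  # excluded from overall rankings

ROWS = [
    ProviderTableRow("HellaSwag", "DeepSeek V4 Pro Base", "DeepSeek", 88.0,
                     "DeepSeek-V4 base-model evaluations"),
    ProviderTableRow("HellaSwag", "DeepSeek V4 Flash Base", "DeepSeek", 85.7,
                     "DeepSeek-V4 base-model evaluations"),
]
```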
Version: HellaSwag 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
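A minimal sketch of how such a policy could be encoded, assuming hypothetical field names and refresh windows; the authoritative rules live on the methodology page, not here:

```python
from datetime import date, timedelta

# Assumed refresh windows per cadence; BenchLM's real thresholds
# are defined on its methodology page.
CADENCE_DAYS = {"Monthly": 31, "Quarterly": 92, "Yearly": 366}

def classify(staleness_state: str, refresh_cadence: str,
             last_refresh: date, published_exact_values: bool,
             today: date) -> str:
    """Map freshness metadata to one of the three treatment tiers
    named above."""
    # Rows mirrored from published provider tables are never ranked,
    # as with HellaSwag on this page.
    if published_exact_values:
        return "display-only reference"
    overdue = (today - last_refresh) > timedelta(
        days=CADENCE_DAYS[refresh_cadence])
    if staleness_state == "Current" and not overdue:
        return "strong differentiator"
    return "benchmark to watch"

# Example: HellaSwag as shown on this page (dates are illustrative).
print(classify("Current", "Quarterly", date(2026, 1, 1), True,
               date(2026, 3, 1)))  # -> "display-only reference"
```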
DeepSeek V4 Pro Base by DeepSeek currently leads HellaSwag with a score of 88.0%. Two AI models have been evaluated on HellaSwag on BenchLM.