
HellaSwag

A commonsense natural-language inference benchmark reported in DeepSeek-V4 base-model evaluations.

Benchmark score on HellaSwag — April 24, 2026

BenchLM mirrors the published score view for HellaSwag. DeepSeek V4 Pro Base leads the public snapshot at 88.0%, followed by DeepSeek V4 Flash Base (85.7%). BenchLM does not use these results to rank models overall.

2 models · Reasoning · Current · Display only · Updated April 24, 2026

About HellaSwag

Year: 2026
Tasks: Commonsense completion questions
Format: Exact match (see the sketch below)
Difficulty: Commonsense reasoning
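
To make the task format concrete, here is a minimal sketch of HellaSwag-style exact-match scoring: each item pairs a context with four candidate endings, and a model's chosen ending is scored by exact match against the gold label. The example item is illustrative, not drawn from the real dataset.

```python
# Illustrative HellaSwag-style item: a context plus four candidate
# endings, with the gold ending identified by index. This item is
# invented for illustration, not taken from the actual benchmark.
items = [
    {
        "context": "A man pours pancake batter into a hot pan. He",
        "endings": [
            "flips the pancake when bubbles form on top.",
            "puts the pan in the freezer.",
            "reads a newspaper about pancakes.",
            "paints the pan with the batter.",
        ],
        "label": 0,  # index of the correct ending
    },
]

def accuracy(predictions: list[int], items: list[dict]) -> float:
    """Fraction of items where the predicted ending index exactly
    matches the gold label."""
    correct = sum(p == item["label"] for p, item in zip(predictions, items))
    return correct / len(items)

print(accuracy([0], items))  # 1.0 for this single illustrative item
```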

Because exact values are published in DeepSeek-V4 evaluations, BenchLM stores HellaSwag as a display-only provider-table row.
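
To illustrate what such a row might contain, here is a hypothetical sketch of a display-only provider-table record. The field names and structure are assumptions for illustration; BenchLM's actual schema is not shown on this page.

```python
from dataclasses import dataclass

# Hypothetical sketch of a display-only provider-table row. Field names
# are assumptions; BenchLM's real schema is not published here.
@dataclass(frozen=True)
class ProviderTableRow:
    benchmark: str      # e.g. "HellaSwag"
    model: str          # e.g. "DeepSeek V4 Pro Base"
    score: float        # exact value as published by the provider
    source: str         # where the number was published
    display_only: bool  # excluded from BenchLM's overall rankings

rows = [
    ProviderTableRow("HellaSwag", "DeepSeek V4 Pro Base", 88.0,
                     "DeepSeek-V4 base-model evaluations", True),
    ProviderTableRow("HellaSwag", "DeepSeek V4 Flash Base", 85.7,
                     "DeepSeek-V4 base-model evaluations", True),
]
```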

BenchLM freshness & provenance

Version: HellaSwag 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set


BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
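
As an illustration of that decision, here is a hypothetical sketch of a freshness-based tiering rule. The logic and tier names follow the sentence above, but the exact conditions are assumptions; the authoritative policy is the methodology page.

```python
# Hypothetical sketch of the freshness-based classification described
# above. The conditions are assumptions for illustration only;
# BenchLM's real scoring policy lives on its methodology page.

def classify(staleness_state: str, display_only: bool) -> str:
    """Map freshness metadata to one of the three tiers named above."""
    if display_only:
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

print(classify("Current", display_only=True))  # "display-only reference"
```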

Benchmark score table (2 models)

1. DeepSeek V4 Pro Base: 88.0%
2. DeepSeek V4 Flash Base: 85.7%

FAQ

What does HellaSwag measure?

HellaSwag is a commonsense natural-language inference benchmark; the scores shown here are those reported in DeepSeek-V4 base-model evaluations.

Which model scores highest on HellaSwag?

DeepSeek V4 Pro Base by DeepSeek currently leads with a score of 88.0% on HellaSwag.

How many models are evaluated on HellaSwag?

Two AI models have HellaSwag results listed on BenchLM.


Last updated: April 24, 2026 · BenchLM version HellaSwag 2026
