
Chinese-SimpleQA

A Chinese short-form factuality benchmark reported by DeepSeek for V4 model evaluations.

Benchmark score on Chinese-SimpleQA — April 24, 2026

BenchLM mirrors the published score view for Chinese-SimpleQA. DeepSeek V4 Pro (Max) leads the public snapshot at 84.4%, followed by DeepSeek V4 Flash (Max) at 78.9% and DeepSeek V4 Pro (High) at 77.7%. BenchLM does not use these results to rank models overall.

6 models · Knowledge · Current · Display only · Updated April 24, 2026

The published Chinese-SimpleQA snapshot is tightly clustered at the top: DeepSeek V4 Pro (Max) sits at 84.4%, while the third row is only 6.7 points behind. The full six-model spread is 12.9 points, so the benchmark still separates strong models even when the leaders cluster.
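The gap figures above follow directly from the published scores; a minimal sketch of the arithmetic (using the six scores from the snapshot):

```python
# Published Chinese-SimpleQA scores, ranks 1 through 6.
scores = [84.4, 78.9, 77.7, 75.8, 73.2, 71.5]

# Gap between the leader and the third row.
leader_gap = round(scores[0] - scores[2], 1)

# Spread across all six published models.
spread = round(scores[0] - scores[-1], 1)
```

Here `leader_gap` comes out to 6.7 and `spread` to 12.9, matching the figures quoted in the snapshot summary.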

6 models have been evaluated on Chinese-SimpleQA. The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. Chinese-SimpleQA is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
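The "display-only" distinction can be made concrete with a small sketch. This is a hypothetical illustration of category-weighted scoring that skips display-only benchmarks, not BenchLM's actual implementation; the benchmark entries and helper names are assumptions.

```python
# Hypothetical benchmark records (the second entry is invented for illustration).
benchmarks = [
    {"name": "Chinese-SimpleQA", "category": "Knowledge",
     "score": 84.4, "display_only": True},
    {"name": "ExampleKnowledgeBench", "category": "Knowledge",
     "score": 70.0, "display_only": False},
]

# Knowledge carries a 12% weight in the overall scoring system.
CATEGORY_WEIGHTS = {"Knowledge": 0.12}

def category_score(benchmarks, category):
    """Average the scores in a category, excluding display-only benchmarks."""
    scored = [b["score"] for b in benchmarks
              if b["category"] == category and not b["display_only"]]
    return sum(scored) / len(scored) if scored else None

def weighted_contribution(benchmarks, category):
    """Contribution of one category to a weighted overall score."""
    avg = category_score(benchmarks, category)
    return None if avg is None else avg * CATEGORY_WEIGHTS[category]
```

Under this sketch, Chinese-SimpleQA's 84.4% never enters `category_score`, so the Knowledge contribution is driven entirely by scored benchmarks.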

About Chinese-SimpleQA

Year: 2026
Tasks: Chinese factual questions
Format: Short-form factual QA
Difficulty: Factual accuracy focused

BenchLM stores Chinese-SimpleQA as a display-only provider-table reference for DeepSeek-V4. It is separate from the English SimpleQA row.

BenchLM freshness & provenance

Version: Chinese-SimpleQA 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set


BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.

Benchmark score table (6 models)

1. DeepSeek V4 Pro (Max): 84.4%
2. DeepSeek V4 Flash (Max): 78.9%
3. DeepSeek V4 Pro (High): 77.7%
4. 75.8%
5. 73.2%
6. 71.5%

FAQ

What does Chinese-SimpleQA measure?

Chinese-SimpleQA is a short-form factuality benchmark over Chinese factual questions, reported by DeepSeek for its V4 model evaluations.

Which model scores highest on Chinese-SimpleQA?

DeepSeek V4 Pro (Max) by DeepSeek currently leads with a score of 84.4% on Chinese-SimpleQA.

How many models are evaluated on Chinese-SimpleQA?

6 AI models have been evaluated on Chinese-SimpleQA on BenchLM.

Last updated: April 24, 2026 · BenchLM version Chinese-SimpleQA 2026
