A Chinese short-form factuality benchmark reported by DeepSeek for V4 model evaluations.
BenchLM mirrors the published score view for Chinese-SimpleQA. DeepSeek V4 Pro (Max) leads the public snapshot at 84.4%, followed by DeepSeek V4 Flash (Max) (78.9%) and DeepSeek V4 Pro (High) (77.7%). BenchLM does not use these results to rank models overall.
Model | Organization | Score
DeepSeek V4 Pro (Max) | DeepSeek | 84.4%
DeepSeek V4 Flash (Max) | DeepSeek | 78.9%
DeepSeek V4 Pro (High) | DeepSeek | 77.7%
The published Chinese-SimpleQA snapshot is tightly clustered at the top: DeepSeek V4 Pro (Max) sits at 84.4%, while the third row trails by only 6.7 points. The spread across the full published table is 12.9 points, so the benchmark still separates strong models even when the leaders cluster.
Six models have been evaluated on Chinese-SimpleQA. The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. Chinese-SimpleQA itself, however, is currently displayed for reference only and excluded from the scoring formula, so it does not directly affect overall rankings.
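The interaction between a category weight and a display-only exclusion can be read as a simple two-stage aggregation: average the scoring-eligible benchmarks within each category, then combine categories by weight. The sketch below is a hypothetical illustration, not BenchLM's actual formula; the sibling benchmarks, their scores, and the non-Knowledge weights are invented, and only the 12% Knowledge weight and the exclusion of Chinese-SimpleQA come from this page.

# Illustrative sketch only: category-weighted scoring with display-only
# benchmarks excluded. All names, weights, and scores are hypothetical
# examples except the 12% Knowledge weight and the Chinese-SimpleQA exclusion.

CATEGORY_WEIGHTS = {"Knowledge": 0.12, "Reasoning": 0.30, "Coding": 0.58}  # hypothetical mix

benchmarks = [
    # (name, category, score_pct, counts_toward_score)
    ("Chinese-SimpleQA", "Knowledge", 84.4, False),  # display-only reference
    ("SimpleQA",         "Knowledge", 71.0, True),   # hypothetical score
    ("ExampleReasoning", "Reasoning", 90.0, True),   # hypothetical benchmark
]

def overall_score(rows):
    """Average scoring-eligible benchmarks per category, then weight the categories."""
    by_category = {}
    for name, category, score, eligible in rows:
        if eligible:
            by_category.setdefault(category, []).append(score)
    total, weight_used = 0.0, 0.0
    for category, scores in by_category.items():
        weight = CATEGORY_WEIGHTS.get(category, 0.0)
        total += weight * (sum(scores) / len(scores))
        weight_used += weight
    # Renormalize over the weights actually present so missing categories
    # do not drag the overall score down.
    return total / weight_used if weight_used else 0.0

print(f"{overall_score(benchmarks):.1f}")

Because Chinese-SimpleQA is flagged as display-only, its 84.4% never enters the aggregation; only the eligible rows contribute to the weighted result.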
Year: 2026
Tasks: Chinese factual questions
Format: Short-form factual QA
Difficulty: Factual accuracy focused
BenchLM stores Chinese-SimpleQA as a display-only provider-table reference for DeepSeek-V4. It is separate from the English SimpleQA row.
Version: Chinese-SimpleQA 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
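One way to picture that decision is as a small classification over the freshness fields listed above. The sketch below is a hypothetical reading of the page's wording, not BenchLM's documented policy; the field names, thresholds, and tier labels are assumptions, and the authoritative rules live on the methodology page.

# Illustrative sketch only: mapping freshness metadata to a treatment tier.
# The tier labels echo this page's wording; the logic itself is assumed.

def benchmark_tier(staleness_state: str, display_only: bool) -> str:
    """Classify a benchmark as a strong differentiator, a benchmark to watch,
    or a display-only reference, based on freshness metadata."""
    if display_only:
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# Chinese-SimpleQA: its staleness state is Current, but it is stored as a
# display-only provider-table reference, so it never enters the scoring tiers.
print(benchmark_tier("Current", display_only=True))  # -> display-only reference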
DeepSeek V4 Pro (Max) by DeepSeek currently leads with a score of 84.4% on Chinese-SimpleQA.
6 AI models have been evaluated on Chinese-SimpleQA on BenchLM.