A reading and trivia question-answering benchmark reported in DeepSeek-V4 base-model evaluations.
BenchLM mirrors the published score view for TriviaQA. DeepSeek V4 Pro Base leads the public snapshot at 85.6%, followed by DeepSeek V4 Flash Base at 82.8%. BenchLM does not use these results to rank models overall.
DeepSeek V4 Pro Base (DeepSeek)
DeepSeek V4 Flash Base (DeepSeek)
Year: 2026
Tasks: Trivia and reading-comprehension QA
Format: Exact match
Difficulty: General factual QA
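Since TriviaQA results here are scored by exact match, a minimal sketch of that scoring style may help. This assumes the common TriviaQA-style answer normalization (lowercasing, stripping punctuation and articles); BenchLM's exact normalizer is not specified on this page.

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation, remove articles, collapse whitespace.
    This normalization is an assumption, not BenchLM's published recipe."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """True if the normalized prediction equals any normalized gold answer."""
    pred = normalize(prediction)
    return any(pred == normalize(gold) for gold in gold_answers)

print(exact_match("The Eiffel Tower!", ["Eiffel Tower"]))  # -> True
```

A prediction is counted correct only on a normalized string match, which is why exact-match benchmarks penalize verbose answers unless normalization strips the extra wording.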
BenchLM stores TriviaQA as a display-only provider-table row when exact values are published in DeepSeek-V4 evaluations.
Version: TriviaQA 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
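The policy described above can be sketched as a simple classifier. The three category names come from the text; the decision logic and function signature are hypothetical, since the full scoring policy lives on the BenchLM methodology page.

```python
def benchmark_category(staleness_state: str, display_only: bool) -> str:
    """Hypothetical sketch of the freshness-based triage described above.
    Category labels are from the page; the ordering of checks is assumed."""
    if display_only:
        # Provider-table rows mirrored verbatim are never ranked.
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# TriviaQA on this page: staleness "Current" but stored display-only.
print(benchmark_category("Current", display_only=True))  # -> display-only reference
```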
DeepSeek V4 Pro Base by DeepSeek currently leads with a score of 85.6% on TriviaQA.
Two AI models have been evaluated on TriviaQA on BenchLM.