DeepSeek LLM 2.0 Benchmark Scores & Performance

Benchmark analysis of DeepSeek LLM 2.0 by DeepSeek across 14 tests.

Creator: DeepSeek

Source Type: Open Weight

Reasoning: Non-Reasoning

Context Window: 128K tokens

Overall Score: 63 (#37 of 88)

Knowledge Benchmarks

MMLU: 79
GPQA: 78
SuperGPQA: 76
OpenBookQA: 74

Coding Benchmarks

HumanEval: 73

Mathematics Benchmarks

AIME 2023: 80
AIME 2024: 82
AIME 2025: 81
HMMT Feb 2023: 76
HMMT Feb 2024: 78
HMMT Feb 2025: 77
BRUMO 2025: 79

Reasoning Benchmarks

SimpleQA: 77
MuSR: 75

Frequently Asked Questions

How does DeepSeek LLM 2.0 perform overall in AI benchmarks?

DeepSeek LLM 2.0 ranks #37 out of 88 models with an overall score of 63. It is created by DeepSeek and has a 128K-token context window.

Is DeepSeek LLM 2.0 good for knowledge and understanding?

DeepSeek LLM 2.0 ranks #37 out of 88 models in knowledge and understanding benchmarks with an average score of 76.8. There are stronger options in this category.

Is DeepSeek LLM 2.0 good for coding and programming?

DeepSeek LLM 2.0 ranks #36 out of 88 models in coding and programming benchmarks with an average score of 73. There are stronger options in this category.

Is DeepSeek LLM 2.0 good for mathematics?

DeepSeek LLM 2.0 ranks #36 out of 88 models in mathematics benchmarks with an average score of 79. There are stronger options in this category.

Is DeepSeek LLM 2.0 good for reasoning and logic?

DeepSeek LLM 2.0 ranks #36 out of 88 models in reasoning and logic benchmarks with an average score of 76. There are stronger options in this category.
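The per-category averages quoted in the answers above follow directly from the scores listed in the benchmark tables. A minimal check, using only the numbers reported on this page:

```python
# Category scores as listed in the benchmark tables above.
knowledge = [79, 78, 76, 74]             # MMLU, GPQA, SuperGPQA, OpenBookQA
coding = [73]                            # HumanEval
mathematics = [80, 82, 81, 76, 78, 77, 79]  # AIME 2023-2025, HMMT Feb 2023-2025, BRUMO 2025
reasoning = [77, 75]                     # SimpleQA, MuSR

def avg(scores):
    """Mean score, rounded to one decimal place."""
    return round(sum(scores) / len(scores), 1)

print(avg(knowledge))    # 76.8
print(avg(coding))       # 73.0
print(avg(mathematics))  # 79.0
print(avg(reasoning))    # 76.0
```

These match the category averages of 76.8, 73, 79, and 76 stated in the FAQ answers.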

Is DeepSeek LLM 2.0 open source?

DeepSeek LLM 2.0 is released as an open-weight model by DeepSeek, meaning the model weights can be downloaded, run locally, or fine-tuned for specific use cases. Note that open weight is not strictly the same as open source: the weights are public, but the training data and training code may not be.

What is the context window size of DeepSeek LLM 2.0?

DeepSeek LLM 2.0 has a context window of 128K tokens, which determines how much text it can process in a single interaction.
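In practice, checking whether an input fits in the context window means estimating its token count. A common rough heuristic (an assumption here, not a property of DeepSeek's tokenizer; real counts require the model's own tokenizer) is about four characters per token for English prose:

```python
CONTEXT_WINDOW = 128_000  # tokens; "128K" may mean exactly 131,072 depending on the source

def rough_token_count(text: str) -> int:
    # Heuristic: ~4 characters per token for English text.
    # Actual counts depend on the model's tokenizer and the language.
    return len(text) // 4

def fits_in_context(text: str, reserve_for_output: int = 1_000) -> bool:
    """Check whether the input leaves room for the model's response."""
    return rough_token_count(text) <= CONTEXT_WINDOW - reserve_for_output

sample = "hello " * 10_000          # 60,000 characters
print(rough_token_count(sample))    # 15000
print(fits_in_context(sample))      # True
```

Reserving a budget for the model's output matters because the context window covers the prompt and the generated response together.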