
DeepSeek V3.2

DeepSeek · Current · Released Dec 1, 2025
Overall Score
Est. 60 · Provisional #44 of 110
Arena Elo
1424
Categories Ranked
8 of 8
Price (1M tokens)
$0 in / $0 out
Speed
35 tok/s
Context
128K
Open Weight · Non-Reasoning
Confidence
base

According to BenchLM.ai, DeepSeek V3.2 ranks #44 out of 110 models on the provisional leaderboard with an overall score of 60/100. It does not yet have enough sourced coverage for BenchLM's verified leaderboard. While not a frontier model, its open weights and fast, low-token non-reasoning responses can make it a practical choice for particular use cases.

DeepSeek V3.2 is an open-weight model with a 128K-token context window. It processes queries without explicit chain-of-thought reasoning, offering faster response times and lower token usage.

DeepSeek V3.2 sits inside the DeepSeek V3.2 family alongside DeepSeek V3.2 (Thinking). This profile currently has 2 of 152 tracked benchmarks. BenchLM only exposes non-generated benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.

Its strongest category is Multilingual (#33), while its weakest is Multimodal & Grounded (#70). With most category ranks sitting mid-field among the 110 tracked models, it profiles as a capable generalist rather than a standout in any single area.

Ranking Distribution

Category rank across 8 benchmark categories — sorted by best rank

Category Performance

Scores across all benchmark categories (0-100 scale)

Category Breakdown

Agentic

#44
55.7 / 100
Weight: 22% · 1 benchmark
Terminal-Bench 2.0 · BrowseComp · OSWorld-Verified · GAIA · TAU-bench · WebArena

Coding

#37
60.2 / 100
Weight: 20% · 1 benchmark
SWE-bench Verified · LiveCodeBench · SWE-bench Pro · SWE-Rebench · SciCode

Reasoning

#56
48.4 / 100
Weight: 17% · 0 benchmarks
MuSR · LongBench v2 · MRCRv2 · ARC-AGI-2

Knowledge

#41
63.0 / 100
Weight: 12% · 0 benchmarks
GPQA · SuperGPQA · MMLU-Pro · HLE · FrontierScience · SimpleQA

Math

#33
70.9 / 100
Weight: 5% · 0 benchmarks
AIME 2025 · BRUMO 2025 · MATH-500 · FrontierMath

Multilingual

#33
69.2 / 100
Weight: 7% · 0 benchmarks
MGSM · MMLU-ProX

Multimodal

#70
50.4 / 100
Weight: 12% · 0 benchmarks
MMMU-Pro · OfficeQA Pro

Inst. Following

#60
60.6 / 100
Weight: 5% · 0 benchmarks
IFEval · IFBench
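
BenchLM's exact aggregation method isn't published on this page, but the category weights above suggest the overall score is, at least roughly, a weighted blend of the category scores. The sketch below is a minimal illustration under that assumption; on these numbers it lands around 57.5, a little below the estimated 60, so treat it as intuition for how the weights combine rather than a reproduction of BenchLM's formula.

```python
# Illustrative only: assumes the overall score is a simple weighted average
# of category scores, which is NOT a documented BenchLM formula.
category_scores = {
    "Agentic":         (55.7, 0.22),
    "Coding":          (60.2, 0.20),
    "Reasoning":       (48.4, 0.17),
    "Knowledge":       (63.0, 0.12),
    "Math":            (70.9, 0.05),
    "Multilingual":    (69.2, 0.07),
    "Multimodal":      (50.4, 0.12),
    "Inst. Following": (60.6, 0.05),
}

weighted = sum(score * weight for score, weight in category_scores.values())
total_weight = sum(weight for _, weight in category_scores.values())
print(f"weighted average: {weighted / total_weight:.1f}")  # ~57.5 on these numbers
```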

Chatbot Arena Performance

Text Overall: 1424 (CI ±3.8, 42,815 votes)
Coding: 1468 (CI ±6.9, 9,144 votes)
Math: 1428 (CI ±11.5, 2,766 votes)
Instruction Following: 1419 (CI ±6.2, 11,242 votes)
Creative Writing: 1399 (CI ±8.3, 6,011 votes)
Multi-turn: 1427 (CI ±7.6, 7,447 votes)
Hard Prompts: 1447 (CI ±4.9, 22,931 votes)
Hard Prompts (English): 1455 (CI ±6.5, 10,819 votes)
Longer Query: 1440 (CI ±6.4, 10,659 votes)
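
Arena numbers are Elo-style ratings fitted from pairwise human preference votes, so a rating gap translates into an expected head-to-head win rate. LMArena actually fits a Bradley-Terry model, but on this scale the classic Elo conversion gives the same intuition; the opponent ratings in the sketch below are hypothetical.

```python
# Rough intuition for Elo-style Arena scores: a rating gap maps to an
# expected win probability via the standard logistic Elo formula.
def expected_win_rate(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

deepseek_v32 = 1424  # Text Overall rating from the table above
for opponent in (1374, 1424, 1474):  # hypothetical opponents 50 points apart
    p = expected_win_rate(deepseek_v32, opponent)
    print(f"vs {opponent}: expected win rate {p:.0%}")
```

A 50-point gap corresponds to roughly a 57/43 split, which is why models clustered within a few dozen Arena points of each other tend to feel interchangeable in practice.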

Benchmark Details

Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.

DeepSeek V3.2 Family

Base entry

Frequently Asked Questions

How does DeepSeek V3.2 perform overall in AI benchmarks?

DeepSeek V3.2 currently ranks #44 out of 110 models on BenchLM's provisional leaderboard with an overall score of 60 (estimated). It is developed by DeepSeek and features a 128K-token context window.

Is DeepSeek V3.2 good for coding and programming?

DeepSeek V3.2 ranks #37 out of 110 models in coding and programming benchmarks with an average score of 60.2. There are stronger options in this category.

Is DeepSeek V3.2 good for agentic tool use and computer tasks?

DeepSeek V3.2 ranks #44 out of 110 models in benchmarks for agentic tool use and computer tasks, with an average score of 55.7. There are stronger options in this category.

Is DeepSeek V3.2 open source?

Yes, in the open-weight sense: DeepSeek releases the model's weights, meaning it can be downloaded, run locally, or fine-tuned for specific use cases, even though open weights are not the same as a fully open-source release.
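
Because the weights are published, the checkpoint can be pulled and loaded with standard open-source tooling. The sketch below uses the Hugging Face transformers API; the repository id is a placeholder rather than a confirmed name, and a model of this size is normally served through a dedicated inference stack (e.g. vLLM or SGLang) rather than loaded directly, so this only shows the shape of the workflow.

```python
# Minimal sketch of loading an open-weight checkpoint with Hugging Face
# transformers. The repo id is a placeholder; check the actual repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "deepseek-ai/DeepSeek-V3.2"  # placeholder, not a verified repo name

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",      # keep the dtype stored in the checkpoint
    device_map="auto",       # shard across available GPUs
    trust_remote_code=True,  # DeepSeek releases ship custom modeling code
)

inputs = tokenizer(
    "Explain the difference between open weight and open source.",
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=100)[0]))
```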

Which sibling models are related to DeepSeek V3.2?

DeepSeek V3.2 belongs to the DeepSeek V3.2 family. Related variants on BenchLM include DeepSeek V3.2 (Thinking).

Does DeepSeek V3.2 have full benchmark coverage on BenchLM?

Not yet. DeepSeek V3.2 currently has 2 published benchmark scores out of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.

What is the context window size of DeepSeek V3.2?

DeepSeek V3.2 has a context window of 128K tokens, which determines how much text it can process in a single interaction.
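
That 128K figure is measured in tokens, not characters, and it has to cover both the prompt and the generated output. Below is a minimal sketch of a pre-flight budget check, using an assumed 4-characters-per-token heuristic rather than the model's real tokenizer.

```python
# Back-of-the-envelope check of whether a prompt fits in a 128K-token window.
# The 4-characters-per-token ratio is a rough heuristic for English text;
# use the model's actual tokenizer for a precise count.
CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # heuristic assumption

def fits_in_context(prompt: str, max_output_tokens: int = 2_000) -> bool:
    estimated_prompt_tokens = len(prompt) / CHARS_PER_TOKEN
    return estimated_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

document = "..." * 100_000  # stand-in for a long input
print(fits_in_context(document))
```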

Last updated: April 21, 2026 · Runtime metrics stay blank until BenchLM has a sourced snapshot.
