
DeepSeek V3.2 (Thinking)

DeepSeek · Current · Released Dec 1, 2025
Overall Score: Coming soon
Arena Elo: 1423
Categories Ranked: 8 of 8
Price (1M tokens): $0 in / $0 out
Speed: N/A
Context: 128K
Open Weight · Reasoning
Confidence: reasoning

BenchLM is tracking DeepSeek V3.2 (Thinking), but sourced benchmark results are not published on the site yet. This page currently shows the model metadata we can verify now, and score-level benchmark coverage will appear once public evaluations land.

DeepSeek V3.2 (Thinking) is an open weight model with a 128K token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.
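In practice, the model produces its intermediate reasoning separately from the final answer, and those extra tokens are where the added latency and cost come from. As a minimal sketch, assuming DeepSeek's OpenAI-compatible API with its "deepseek-reasoner" alias and "reasoning_content" response field (assumptions based on DeepSeek's public API docs, not details taken from this page), a call might look like this:

# Minimal sketch of calling a chain-of-thought ("thinking") model through an
# OpenAI-compatible endpoint. The model name, base URL, and reasoning_content
# field are assumptions from DeepSeek's public API docs; verify before use.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder key
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",            # assumed alias for the thinking variant
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
)

message = response.choices[0].message
# The chain of thought comes back separately from the final answer; both count
# toward output tokens, which is the latency and cost trade-off noted above.
print("reasoning:", getattr(message, "reasoning_content", None))
print("answer:   ", message.content)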

DeepSeek V3.2 (Thinking) sits inside the DeepSeek V3.2 family alongside DeepSeek V3.2. This profile currently has 0 sourced benchmarks on BenchLM, so the benchmark sections below are intentionally marked as coming soon.

Its strongest category is Agentic (#23), while its weakest is Instruction Following (#59). This profile suggests it is best suited to coding agents, browser research, and computer-use workflows.

Ranking Distribution: category rank across 8 benchmark categories, sorted by best rank
Category Performance: scores across all benchmark categories (0-100 scale)

Category Breakdown

Agentic
Rank #23 · 67.5 / 100 · Weight: 22% · 0 benchmarks
Benchmarks: Terminal-Bench 2.0, BrowseComp, OSWorld-Verified, GAIA, TAU-bench, WebArena

Coding
Rank #40 · 58.0 / 100 · Weight: 20% · 0 benchmarks
Benchmarks: SWE-bench Verified, LiveCodeBench, SWE-bench Pro, SWE-Rebench, SciCode

Reasoning
Rank #46 · 55.9 / 100 · Weight: 17% · 0 benchmarks
Benchmarks: MuSR, LongBench v2, MRCRv2, ARC-AGI-2

Knowledge
Rank #32 · 69.9 / 100 · Weight: 12% · 0 benchmarks
Benchmarks: GPQA, SuperGPQA, MMLU-Pro, HLE, FrontierScience, SimpleQA

Math
Rank #50 · 52.3 / 100 · Weight: 5% · 0 benchmarks
Benchmarks: AIME 2025, BRUMO 2025, MATH-500, FrontierMath

Multilingual
Rank #40 · 65.4 / 100 · Weight: 7% · 0 benchmarks
Benchmarks: MGSM, MMLU-ProX

Multimodal
Rank #54 · 59.3 / 100 · Weight: 12% · 0 benchmarks
Benchmarks: MMMU-Pro, OfficeQA Pro

Instruction Following
Rank #59 · 61.0 / 100 · Weight: 5% · 0 benchmarks
Benchmarks: IFEval, IFBench
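
The category weights above sum to 100%, which suggests that the overall score, once benchmark coverage lands, would be a weighted mean of the eight category scores. The snippet below is a minimal sketch of that aggregation, assuming exactly that formula (BenchLM's actual scoring method is not documented on this page) and using the category scores listed above:

# Sketch: combining category scores into one overall number, assuming a simple
# weighted mean with the weights shown above. Illustration only; this is not
# BenchLM's published scoring method.
category_scores = {
    "Agentic":               (67.5, 0.22),
    "Coding":                (58.0, 0.20),
    "Reasoning":             (55.9, 0.17),
    "Knowledge":             (69.9, 0.12),
    "Math":                  (52.3, 0.05),
    "Multilingual":          (65.4, 0.07),
    "Multimodal":            (59.3, 0.12),
    "Instruction Following": (61.0, 0.05),
}

total_weight = sum(weight for _, weight in category_scores.values())
overall = sum(score * weight for score, weight in category_scores.values()) / total_weight
print(f"Weighted overall score: {overall:.1f} / 100")  # about 61.7 under this assumption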

Chatbot Arena Performance

Text Overall: 1423 (CI ±3.8, 37,197 votes)
Coding: 1474 (CI ±7.3, 7,364 votes)
Math: 1424 (CI ±12.4, 2,286 votes)
Instruction Following: 1418 (CI ±6.4, 9,698 votes)
Creative Writing: 1392 (CI ±8.3, 5,604 votes)
Multi-turn: 1426 (CI ±8.0, 6,187 votes)
Hard Prompts: 1446 (CI ±5.0, 19,701 votes)
Hard Prompts (English): 1462 (CI ±6.7, 9,216 votes)
Longer Query: 1441 (CI ±6.7, 9,112 votes)
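
Arena Elo ratings are pairwise: under the standard logistic Elo model (Chatbot Arena's leaderboard uses the closely related Bradley-Terry formulation), a rating gap maps to an expected head-to-head win rate. The sketch below shows that conversion; the 400-point scale factor is the conventional Elo choice, and the opponent rating is a made-up example rather than a figure from this page.

def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Expected probability that model A beats model B under logistic Elo
    with the conventional 400-point scale."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Example: the Text Overall rating above (1423) against a hypothetical opponent
# rated 1373. A 50-point gap corresponds to roughly a 57% expected win rate.
print(f"{elo_win_probability(1423, 1373):.2f}")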

Benchmark Details

Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.

DeepSeek V3.2 Family

Reasoning: DeepSeek V3.2 (Thinking)
Canonical entry: DeepSeek V3.2

Frequently Asked Questions

How does DeepSeek V3.2 (Thinking) perform overall in AI benchmarks?

BenchLM is tracking DeepSeek V3.2 (Thinking), but sourced benchmark coverage is still coming soon. We currently list its creator, model type, and context window while we wait for public benchmark results.

Is DeepSeek V3.2 (Thinking) open source?

Yes, DeepSeek V3.2 (Thinking) is an open weight model created by DeepSeek, meaning it can be downloaded and run locally or fine-tuned for specific use cases.

Which sibling models are related to DeepSeek V3.2 (Thinking)?

DeepSeek V3.2 (Thinking) belongs to the DeepSeek V3.2 family. Related variants on BenchLM include DeepSeek V3.2.

Does DeepSeek V3.2 (Thinking) have full benchmark coverage on BenchLM?

Not yet. DeepSeek V3.2 (Thinking) currently has 0 published benchmark scores out of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.

What is the context window size of DeepSeek V3.2 (Thinking)?

DeepSeek V3.2 (Thinking) has a context window of 128K tokens, which determines how much text it can process in a single interaction.
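
For a rough sense of scale, the sketch below converts 128K tokens into approximate English words and pages; the 0.75 words-per-token ratio and 500 words-per-page figure are common heuristics, not measured values, and vary by tokenizer, language, and formatting.

# Back-of-the-envelope estimate of how much English text a 128K-token window holds.
# Both conversion factors are rough heuristics, not measured values.
context_tokens = 128_000
words_per_token = 0.75      # common approximation for English text
words_per_page = 500        # typical printed page

approx_words = int(context_tokens * words_per_token)
approx_pages = approx_words / words_per_page
print(f"~{approx_words:,} words, roughly {approx_pages:.0f} pages")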

Last updated: April 20, 2026 · Runtime metrics stay blank until BenchLM has a sourced snapshot.
