DeepSeek V3.2 vs Trinity-Large-Thinking

Side-by-side benchmark comparison across agentic, coding, multimodal, reasoning, knowledge, instruction-following, multilingual, and math workflows.

Benchmark data for one or both models is coming soon. This page currently shows metadata and pricing where BenchLM has it, and score-level comparisons will populate as public benchmark results land.

DeepSeek V3.2 · Trinity-Large-Thinking

Quick Verdict

Benchmark data for DeepSeek V3.2 and Trinity-Large-Thinking is coming soon on BenchLM.

BenchLM has partial data for these models, but not enough overlapping benchmark coverage to produce a fair score-level comparison yet.

Trinity-Large-Thinking is priced at $0.25 input / $0.90 output per 1M tokens, while DeepSeek V3.2 is currently listed as free ($0.00 input / $0.00 output per 1M tokens). Trinity-Large-Thinking also has the larger context window at 512K tokens, compared with 128K for DeepSeek V3.2.
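
To make the pricing gap concrete, here is a minimal cost sketch. It assumes simple linear per-token pricing, treats the 128K and 512K context windows as 128,000 and 512,000 tokens, and uses a hypothetical PRICING table and request_cost helper; it is an illustration, not BenchLM's pricing engine.

```python
# Illustrative cost math only: prices are USD per 1M tokens as quoted above,
# context limits treat 128K/512K as 128,000/512,000 tokens, and the
# PRICING table / request_cost helper are hypothetical names, not a BenchLM API.

PRICING = {
    "DeepSeek V3.2":          {"input": 0.00, "output": 0.00, "context": 128_000},
    "Trinity-Large-Thinking": {"input": 0.25, "output": 0.90, "context": 512_000},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request under simple linear per-token pricing."""
    p = PRICING[model]
    if input_tokens > p["context"]:
        raise ValueError(f"{model}: prompt exceeds its {p['context']:,}-token context window")
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical workload: a 20,000-token prompt with a 2,000-token response.
for model in PRICING:
    print(f"{model}: ${request_cost(model, 20_000, 2_000):.4f}")
# DeepSeek V3.2: $0.0000
# Trinity-Large-Thinking: $0.0068  (20,000 * 0.25/1e6 + 2,000 * 0.90/1e6)
```

At these rates, a 20K-in / 2K-out request on Trinity-Large-Thinking costs well under a cent, so the price difference mainly matters at volume.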

Operational tradeoffs

Metric | DeepSeek V3.2 | Trinity-Large-Thinking
Price (per 1M tokens, input / output) | Free* | $0.25 / $0.90
Speed | 35 t/s | N/A
TTFT | 3.75 s | N/A
Context | 128K | 512K
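
Those runtime numbers combine into a rough end-to-end latency estimate: total time is roughly TTFT plus output tokens divided by decode speed. The sketch below applies that rule to the sourced DeepSeek V3.2 snapshot; Trinity-Large-Thinking shows N/A, so no comparable estimate is possible.

```python
# Back-of-the-envelope latency from the sourced DeepSeek V3.2 snapshot above:
# total_time ~= TTFT + output_tokens / decode_speed.
# Trinity-Large-Thinking shows N/A here, so no comparable estimate exists.

TTFT_S = 3.75      # time to first token, seconds
SPEED_TPS = 35.0   # decode speed, tokens per second

def estimated_latency_s(output_tokens: int) -> float:
    """Rough wall-clock seconds to stream a reply of the given length."""
    return TTFT_S + output_tokens / SPEED_TPS

print(f"500-token reply:   ~{estimated_latency_s(500):.1f}s")   # ~18.0s
print(f"2,000-token reply: ~{estimated_latency_s(2000):.1f}s")  # ~60.9s
```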

Decision framing

BenchLM keeps the benchmark table and the operator tradeoffs on the same page so a better score does not hide a materially slower, pricier, or smaller-context model.

Runtime metrics show N/A when BenchLM does not have a sourced snapshot for that exact model. The scoring rules and freshness policy are documented on the methodology page.
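
As an illustration of that rule (a hypothetical sketch, not BenchLM's actual code), a runtime cell can fall back to N/A whenever no sourced snapshot exists for the exact model:

```python
# Hypothetical sketch only (not BenchLM's implementation): a runtime cell
# renders "N/A" whenever there is no sourced snapshot for the exact model.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Snapshot:
    speed_tps: float  # decode speed, tokens/s
    ttft_s: float     # time to first token, s
    source_url: str   # every snapshot must cite a source

def render_cell(snapshot: Optional[Snapshot], metric: str) -> str:
    """Return the metric for display, or 'N/A' when no snapshot is sourced."""
    if snapshot is None:
        return "N/A"
    return f"{getattr(snapshot, metric):g}"

deepseek = Snapshot(speed_tps=35.0, ttft_s=3.75, source_url="...")
print(render_cell(deepseek, "ttft_s"))  # -> 3.75
print(render_cell(None, "ttft_s"))      # -> N/A  (Trinity-Large-Thinking today)
```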

Sourced scores to date for DeepSeek V3.2 and Trinity-Large-Thinking (per-model columns will populate as public results land):

Benchmark | Score

Agentic
Terminal-Bench 2.0 | 60%
BrowseComp | 62%
OSWorld-Verified | 55%
Tau2-Airline | 88.0%
Tau2-Telecom | 94.7%
PinchBench | 91.9%
BFCL v4 | 70.1%

Coding
HumanEval | 76%
SWE-bench Verified | 45%
LiveCodeBench | 39%
SWE-bench Pro | 47%
SWE-Rebench | 60.9%
React Native Evals | 69%
SWE-bench Verified* | 63.2%

Multimodal & Grounded
MMMU-Pro | 61%
OfficeQA Pro | 72%

Reasoning
MuSR | 79%
BBH | 81%
LongBench v2 | 69%
MRCRv2 | 70%
ARC-AGI-2 | 4%

Knowledge
MMLU | 84%
GPQA | 83%
SuperGPQA | 81%
MMLU-Pro | 73%
HLE | 11%
FrontierScience | 72%
SimpleQA | 81%
GPQA-D | 76.3%
MMLU-Pro (Arcee) | 83.4%

Instruction Following
IFEval | 85%
IFBench | 52.3%

Multilingual
MGSM | 84%
MMLU-ProX | 81%

Mathematics
AIME 2023 | 84%
AIME 2024 | 86%
AIME 2025 | 85%
HMMT Feb 2023 | 80%
HMMT Feb 2024 | 82%
HMMT Feb 2025 | 81%
BRUMO 2025 | 83%
MATH-500 | 81%
AIME25 (Arcee) | 96.3%
Frequently Asked Questions

Can I compare DeepSeek V3.2 and Trinity-Large-Thinking on BenchLM yet?

Not fully yet. BenchLM is tracking both models, but the sourced benchmark breakdown for this comparison is still coming soon.

Why does this comparison show “coming soon”?

BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.

What data is available for DeepSeek V3.2 and Trinity-Large-Thinking today?

DeepSeek V3.2: $0.00 input / $0.00 output per 1M tokens
Trinity-Large-Thinking: $0.25 input / $0.90 output per 1M tokens

Both model pages still include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.

Last updated: April 1, 2026
