Claude Opus 4.6 vs Trinity-Large-Thinking

Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, and math workflows.

Benchmark data for one or both models is coming soon. This page currently shows metadata and pricing where BenchLM has it, and score-level comparisons will populate as public benchmark results land.

Claude Opus 4.6 · Trinity-Large-Thinking

Quick Verdict

Benchmark data for Claude Opus 4.6 and Trinity-Large-Thinking is coming soon on BenchLM.

BenchLM has partial data for these models, but not enough overlapping benchmark coverage to produce a fair score-level comparison yet.

Claude Opus 4.6 is priced at $15.00 input / $75.00 output per 1M tokens, versus $0.25 input / $0.90 output per 1M tokens for Trinity-Large-Thinking. Claude Opus 4.6 has the larger context window at 1M tokens, compared with 512K for Trinity-Large-Thinking.
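
To make that price gap concrete, here is a back-of-envelope cost sketch in Python using only the list prices quoted above; the request volume and token mix are hypothetical, and a real bill also depends on caching, batching, and retries.

# Back-of-envelope monthly cost from the per-1M-token list prices above.
# The workload (request count, tokens per request) is a made-up example.
PRICES = {  # USD per 1M tokens: (input, output)
    "Claude Opus 4.6": (15.00, 75.00),
    "Trinity-Large-Thinking": (0.25, 0.90),
}

def monthly_cost(model: str, requests: int, in_tok: int, out_tok: int) -> float:
    """Estimated monthly spend for a given request volume and token mix."""
    in_price, out_price = PRICES[model]
    return requests * (in_tok * in_price + out_tok * out_price) / 1_000_000

# Example: 100k requests/month at 2,000 input + 500 output tokens each.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 100_000, 2_000, 500):,.2f}/month")
# Claude Opus 4.6: $6,750.00/month; Trinity-Large-Thinking: $95.00/month

At list prices that is a 60x gap on input and roughly 83x on output, which is the scale of difference any eventual score comparison has to be weighed against.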

Operational tradeoffs

Metric                            Claude Opus 4.6    Trinity-Large-Thinking
Price (input / output per 1M)     $15.00 / $75.00    $0.25 / $0.90
Speed (output throughput)         40 t/s             N/A
TTFT (time to first token)        1.78s              N/A
Context window                    1M tokens          512K tokens
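
The Speed and TTFT rows combine into a rough latency floor: total time is about TTFT plus output tokens divided by throughput. A minimal sketch, assuming throughput stays flat across the generation (it rarely does); Trinity-Large-Thinking cannot be estimated yet because its runtime rows are N/A.

# Lower-bound wall-clock estimate for one generation:
#   latency ≈ TTFT + output_tokens / throughput
def gen_latency_s(ttft_s: float, tokens_per_s: float, out_tokens: int) -> float:
    """Rough floor on end-to-end time for a single response."""
    return ttft_s + out_tokens / tokens_per_s

# Claude Opus 4.6 snapshot from the table: 1.78 s TTFT, 40 t/s.
print(f"{gen_latency_s(1.78, 40.0, 500):.1f} s")  # ~14.3 s for a 500-token reply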

Decision framing

BenchLM keeps the benchmark table and the operator tradeoffs on the same page so a better score does not hide a materially slower, pricier, or smaller-context model.

Runtime metrics show N/A when BenchLM does not have a sourced snapshot for that exact model. The scoring rules and freshness policy are documented on the methodology page.
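
As an illustration of that rule, the sketch below renders a head-to-head cell only when both sides have a sourced value; the function and data shapes are invented for the example, not BenchLM's actual schema.

from typing import Optional

# Hypothetical rendering rule: show a head-to-head call only when BenchLM has
# a sourced value for BOTH models; otherwise fall back to "N/A".
def head_to_head(a: Optional[float], b: Optional[float]) -> str:
    if a is None or b is None:
        return "N/A"  # no sourced snapshot for at least one model
    return f"{a:.1f}% vs {b:.1f}%"

print(head_to_head(82.0, 88.0))   # "82.0% vs 88.0%" (both sourced)
print(head_to_head(65.4, None))   # "N/A" (one side missing)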

Benchmark                 Claude Opus 4.6    Trinity-Large-Thinking

Agentic
Terminal-Bench 2.0        65.4%              N/A
BrowseComp                84%                N/A
BrowseComp-VL             35.9%              N/A
OSWorld                   72.2%              N/A
Tau2-Airline              82.0%              88.0%
Tau2-Telecom              92.1%              94.7%
PinchBench                93.3%              91.9%
BFCL v4                   77.0%              70.1%
AndroidWorld              62.0%              N/A
WebVoyager                88.0%              N/A

Coding
SWE-bench Verified        80.8%              N/A
SWE-bench Verified*       75.6%              63.2%
LiveCodeBench             76%                N/A
FLTEval                   39.6%              N/A
SWE-Rebench               65.3%              N/A
React Native Evals        84.4%              N/A

Multimodal & Grounded
MMMU-Pro                  77.3%              N/A
OfficeQA Pro              94%                N/A
Design2Code               77.3%              N/A
Flame-VLM-Code            98.8%              N/A
Vision2Web                43.5%              N/A
MMSearch                  63.8%              N/A
MMSearch-Plus             25.6%              N/A
SimpleVQA                 63.2%              N/A
V*                        66.5%              N/A

Reasoning
MuSR                      93%                N/A
BBH                       94%                N/A
LongBench v2              92%                N/A
MRCRv2                    76%                N/A
ARC-AGI-2                 68.8%              N/A

Knowledge
MMLU                      99%                N/A
GPQA                      91.3%              N/A
GPQA-D                    89.2%              76.3%
SuperGPQA                 95%                N/A
MMLU-Pro                  82%                N/A
MMLU-Pro (Arcee)          89.1%              83.4%
HLE                       53%                N/A
FrontierScience           88%                N/A
SimpleQA                  72%                N/A

Instruction Following
IFBench                   53.1%              52.3%

Multilingual
MGSM                      96%                N/A

Mathematics
AIME 2023                 99%                N/A
AIME 2024                 99%                N/A
AIME 2025                 98%                N/A
AIME25 (Arcee)            99.8%              96.3%
HMMT Feb 2023             95%                N/A
HMMT Feb 2024             97%                N/A
HMMT Feb 2025             96%                N/A
BRUMO 2025                96%                N/A
MATH-500                  98%                N/A
Frequently Asked Questions

Can I compare Claude Opus 4.6 and Trinity-Large-Thinking on BenchLM yet?

Not fully yet. BenchLM is tracking both models, but the sourced benchmark breakdown for this comparison is still coming soon.

Why does this comparison show “coming soon”?

BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.

What data is available for Claude Opus 4.6 and Trinity-Large-Thinking today?

Claude Opus 4.6: $15.00 input / $75.00 output per 1M tokens.
Trinity-Large-Thinking: $0.25 input / $0.90 output per 1M tokens.
Both model pages still include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.

Last updated: April 1, 2026
