Kimi K2.5 vs Trinity-Large-Thinking

Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, and math workflows.

Benchmark data for one or both models is coming soon. This page currently shows metadata and pricing where BenchLM has it, and score-level comparisons will populate as public benchmark results land.

Kimi K2.5 · Trinity-Large-Thinking

Quick Verdict

Benchmark data for Kimi K2.5 and Trinity-Large-Thinking is coming soon on BenchLM.

BenchLM has partial data for these models, but not enough overlapping benchmark coverage to produce a fair score-level comparison yet.
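As a rough illustration of what "overlapping coverage" means, here is a minimal Python sketch, using only the paired rows from the table below as illustrative data, that counts the benchmarks where both models have sourced scores and tallies head-to-head wins. It is a reader-side example, not how BenchLM scores comparisons.

```python
# Benchmarks where both models currently have sourced scores
# (taken from the paired rows in the table below).
kimi_k25 = {
    "Tau2-Airline": 80.0, "Tau2-Telecom": 95.9, "PinchBench": 84.8,
    "BFCL v4": 68.3, "SWE-bench Verified*": 70.8, "GPQA-D": 86.9,
    "MMLU-Pro (Arcee)": 87.1, "AIME25 (Arcee)": 96.3, "IFBench": 70.2,
}
trinity = {
    "Tau2-Airline": 88.0, "Tau2-Telecom": 94.7, "PinchBench": 91.9,
    "BFCL v4": 70.1, "SWE-bench Verified*": 63.2, "GPQA-D": 76.3,
    "MMLU-Pro (Arcee)": 83.4, "AIME25 (Arcee)": 96.3, "IFBench": 52.3,
}

# Overlap = benchmarks with a sourced score for both models.
overlap = sorted(kimi_k25.keys() & trinity.keys())
kimi_wins = sum(kimi_k25[b] > trinity[b] for b in overlap)
trinity_wins = sum(trinity[b] > kimi_k25[b] for b in overlap)

print(f"Overlapping benchmarks: {len(overlap)}")
print(f"Head-to-head: Kimi K2.5 {kimi_wins}, Trinity-Large-Thinking {trinity_wins}")
```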

Kimi K2.5 is priced at $0.50 input / $2.80 output per 1M tokens, versus $0.25 input / $0.90 output per 1M tokens for Trinity-Large-Thinking. Trinity-Large-Thinking has the larger context window at 512K, compared with 128K for Kimi K2.5.
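To make the rate difference concrete, here is a minimal sketch of the per-request cost arithmetic using the rates listed above. The 8K-input / 1K-output workload is a hypothetical example, not a measured traffic profile.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Cost in dollars for one request, given per-1M-token rates."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Rates from this page ($ per 1M tokens): (input, output).
RATES = {
    "Kimi K2.5": (0.50, 2.80),
    "Trinity-Large-Thinking": (0.25, 0.90),
}

# Hypothetical workload: 8K-token prompt, 1K-token completion.
for name, (in_rate, out_rate) in RATES.items():
    cost = request_cost(8_000, 1_000, in_rate, out_rate)
    print(f"{name}: ${cost:.4f} per request")
```

On that workload the sketch gives roughly $0.0068 per request for Kimi K2.5 versus $0.0029 for Trinity-Large-Thinking; the gap shifts with your actual input/output ratio.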

Operational tradeoffs

Metric                           Kimi K2.5        Trinity-Large-Thinking
Price (per 1M tokens, in/out)    $0.50 / $2.80    $0.25 / $0.90
Speed                            45 t/s           N/A
TTFT                             2.38 s           N/A
Context                          128K             512K
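For the runtime snapshot above, a back-of-envelope latency model (an assumption for illustration, not BenchLM's methodology) is total time ≈ TTFT + output_tokens / decode_speed. Only Kimi K2.5 has sourced numbers to plug in.

```python
def est_latency(ttft_s: float, tokens_per_s: float, output_tokens: int) -> float:
    """Back-of-envelope wall-clock estimate: time to first token plus
    output tokens streamed at the measured decode rate."""
    return ttft_s + output_tokens / tokens_per_s

# Kimi K2.5 snapshot from the table above; Trinity-Large-Thinking is N/A.
print(f"~{est_latency(2.38, 45.0, 1_000):.1f} s for a 1K-token completion")
```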

Decision framing

BenchLM keeps the benchmark table and the operator tradeoffs on the same page so a better score does not hide a materially slower, pricier, or smaller-context model.

Runtime metrics show N/A when BenchLM does not have a sourced snapshot for that exact model. The scoring rules and freshness policy are documented on the methodology page.

Benchmark results: Kimi K2.5 / Trinity-Large-Thinking. Paired values follow that order; single values are the only sourced score BenchLM currently has for that row.

Agentic
Terminal-Bench 2.0: 50.8%
BrowseComp: 60.6%
OSWorld-Verified: 63.3%
BrowseComp-VL: 42.9%
OSWorld: 63.3%
Tau2-Airline: 80.0% / 88.0%
Tau2-Telecom: 95.9% / 94.7%
PinchBench: 84.8% / 91.9%
BFCL v4: 68.3% / 70.1%
AndroidWorld: 43.1%
WebVoyager: 84.3%

Coding
HumanEval: 99%
SWE-bench Verified: 76.8%
SWE-bench Verified*: 70.8% / 63.2%
LiveCodeBench: 85%
SWE-bench Pro: 40%
SWE-Rebench: 58.5%
React Native Evals: 74.9%

Multimodal & Grounded
MMMU-Pro: 78.5%
OfficeQA Pro: 69%
Design2Code: 91.3%
Flame-VLM-Code: 88.8%
Vision2Web: 33.2%
ImageMining: 24.4%
MMSearch: 58.7%
MMSearch-Plus: 25.6%
SimpleVQA: 71.5%
Facts-VLM: 57.8%
V*: 84.3%

Reasoning
MuSR: 72%
BBH: 81%
LongBench v2: 67%
MRCRv2: 70%

Knowledge
MMLU: 77%
GPQA: 87.6%
GPQA-D: 86.9% / 76.3%
SuperGPQA: 74%
MMLU-Pro: 87.1%
MMLU-Pro (Arcee): 87.1% / 83.4%
HLE: 11%
FrontierScience: 67%
SimpleQA: 74%

Instruction Following
IFEval: 94%
IFBench: 70.2% / 52.3%

Multilingual
MGSM: 83%
MMLU-ProX: 78%

Mathematics
AIME 2023: 77%
AIME 2024: 79%
AIME 2025: 78%
AIME25 (Arcee): 96.3% / 96.3%
HMMT Feb 2023: 73%
HMMT Feb 2024: 75%
HMMT Feb 2025: 74%
BRUMO 2025: 76%
MATH-500: 82%
Frequently Asked Questions

Can I compare Kimi K2.5 and Trinity-Large-Thinking on BenchLM yet?

Not fully yet. BenchLM is tracking both models, but the sourced benchmark breakdown for this comparison is still coming soon.

Why does this comparison show “coming soon”?

BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.

What data is available for Kimi K2.5 and Trinity-Large-Thinking today?

Kimi K2.5: $0.50 input / $2.80 output per 1M tokens.
Trinity-Large-Thinking: $0.25 input / $0.90 output per 1M tokens.
Both model pages also include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.

Last updated: April 1, 2026
