Head-to-head comparison across 7 benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Provisional overall score: Kimi K2.5 68 · Qwen3.5 397B 66
Verified leaderboard positions: Kimi K2.5 #9 · Qwen3.5 397B #10
Pick Kimi K2.5 if you want the stronger overall benchmark profile. Qwen3.5 397B becomes the better choice only if multilingual performance is the priority or you want the cheaper token bill.
Category score differences (leader in parentheses):
Agentic: +1.6 (Qwen3.5 397B)
Coding: +3.9 (Kimi K2.5)
Reasoning: +2.2 (Qwen3.5 397B)
Knowledge: +0.1 (Qwen3.5 397B)
Multilingual: +2.4 (Qwen3.5 397B)
Multimodal: +0.5 (Qwen3.5 397B)
Instruction Following: +1.3 (Kimi K2.5)
Model · Price (input / output per 1M tokens) · Throughput · Latency · Context window
Kimi K2.5 · $0.50 / $2.80 · 45 t/s · 2.38s · 256K
Qwen3.5 397B · $0.00 / $0.00 · 96 t/s · 2.44s · 128K
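The throughput gap matters more than the near-tied latency figures suggest. Here is a minimal sketch of end-to-end generation time, assuming the latency column is time-to-first-token and the t/s figure is steady decode speed (neither interpretation is confirmed by the page):

```python
# Rough end-to-end generation time from the table above. Assumes the
# latency column is time-to-first-token (TTFT) and the t/s column is
# steady decode throughput; both are assumptions about what BenchLM measures.

def gen_time(ttft_s: float, tokens_per_s: float, n_tokens: int) -> float:
    """Estimated seconds to stream n_tokens of output."""
    return ttft_s + n_tokens / tokens_per_s

for name, ttft, tps in [("Kimi K2.5", 2.38, 45), ("Qwen3.5 397B", 2.44, 96)]:
    print(f"{name}: {gen_time(ttft, tps, 1000):.1f}s for a 1,000-token reply")
# -> Kimi K2.5: 24.6s · Qwen3.5 397B: 12.9s
```

Under those assumptions, Qwen3.5 397B's roughly 2x throughput halves the wall time of a long reply, even though the two models look identical on first-token latency.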
Kimi K2.5 has the cleaner provisional overall profile here, landing at 68 versus 66. It is a real lead, but still close enough that category-level strengths matter more than the headline number.
Kimi K2.5's sharpest advantage is in coding, where it averages 64.2 against 60.3. The single biggest benchmark swing on the page is MMLU-ProX, where Qwen3.5 397B leads 84.7% to 82.3%. Qwen3.5 397B also hits back in multilingual overall, so the answer changes if that is the part of the workload you care about most.
Kimi K2.5 is also the more expensive model on tokens at $0.50 input / $2.80 output per 1M tokens, versus $0.00 / $0.00 for Qwen3.5 397B. Because Qwen3.5 397B is listed at zero, a cost multiple is undefined; the practical comparison is paid versus free. Kimi K2.5 gives you the larger context window at 256K, compared with 128K for Qwen3.5 397B.
Kimi K2.5 is ahead on BenchLM's provisional leaderboard, 68 to 66. The biggest single separator in this matchup is MMLU-ProX, where Kimi K2.5 scores 82.3% to Qwen3.5 397B's 84.7%.
Qwen3.5 397B has the edge for knowledge tasks in this comparison, averaging 65.2 versus 65.1. Inside this category, HLE is the benchmark that creates the most daylight between them.
Kimi K2.5 has the edge for coding in this comparison, averaging 64.2 versus 60.3. Inside this category, LiveCodeBench v6 is the benchmark that creates the most daylight between them.
Qwen3.5 397B has the edge for reasoning in this comparison, averaging 63.2 versus 61.0. Inside this category, LongBench v2 is the benchmark that creates the most daylight between them.
Qwen3.5 397B has the edge for agentic tasks in this comparison, averaging 56.2 versus 54.6. Inside this category, DeepPlanning is the benchmark that creates the most daylight between them.
Qwen3.5 397B has the edge for multimodal and grounded tasks in this comparison, averaging 79 versus 78.5. Inside this category, VideoMMMU is the benchmark that creates the most daylight between them.
Kimi K2.5 has the edge for instruction following in this comparison, averaging 93.9 versus 92.6. Inside this category, IFEval is the benchmark that creates the most daylight between them.
Qwen3.5 397B has the edge for multilingual tasks in this comparison, averaging 84.7 versus 82.3. Inside this category, NOVA-63 is the benchmark that creates the most daylight between them.
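Because the headline number hides these category splits, a workload-specific re-weighting of the category averages can be more informative. A minimal sketch follows; the weights are illustrative assumptions, not BenchLM's actual aggregation method, which this page does not specify:

```python
# Re-weight the category averages above for a specific workload.
# Weights are illustrative, not BenchLM's methodology.

scores = {                       # (Kimi K2.5, Qwen3.5 397B)
    "agentic":        (54.6, 56.2),
    "coding":         (64.2, 60.3),
    "reasoning":      (61.0, 63.2),
    "knowledge":      (65.1, 65.2),
    "multilingual":   (82.3, 84.7),
    "multimodal":     (78.5, 79.0),
    "inst_following": (93.9, 92.6),
}

# Hypothetical coding-heavy agentic workload; weights sum to 1.
weights = {"coding": 0.4, "agentic": 0.3, "inst_following": 0.2,
           "reasoning": 0.1}

kimi = sum(w * scores[cat][0] for cat, w in weights.items())
qwen = sum(w * scores[cat][1] for cat, w in weights.items())
print(f"Kimi K2.5: {kimi:.1f} · Qwen3.5 397B: {qwen:.1f}")
# -> Kimi K2.5: 66.9 · Qwen3.5 397B: 65.8
```

Under that coding-heavy weighting Kimi K2.5's lead holds; shift most of the weight onto multilingual and knowledge and the ranking flips, which is exactly why the category breakdown matters more than the headline score.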
Cost estimates assume 50,000 requests/day at an average of 1,000 tokens per request.
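Given those traffic assumptions, a quick back-of-the-envelope monthly bill looks like this; the 700/300 input/output split per request is a further assumption, so adjust it to your workload:

```python
# Back-of-the-envelope monthly bill from the per-token prices above and
# the stated traffic profile (50,000 req/day, ~1,000 tokens/req).
# The 700/300 input/output token split per request is an assumption.

def monthly_cost(price_in_per_m: float, price_out_per_m: float,
                 req_per_day: int = 50_000, tokens_in: int = 700,
                 tokens_out: int = 300, days: int = 30) -> float:
    """Estimated USD per month at the given per-1M-token prices."""
    total_in = req_per_day * tokens_in * days      # input tokens per month
    total_out = req_per_day * tokens_out * days    # output tokens per month
    return (total_in / 1e6) * price_in_per_m + (total_out / 1e6) * price_out_per_m

print(f"Kimi K2.5:    ${monthly_cost(0.50, 2.80):,.0f}/mo")  # ~$1,785
print(f"Qwen3.5 397B: ${monthly_cost(0.00, 0.00):,.0f}/mo")  # $0 at listed prices
```

At that volume the gap is real money but not decisive: roughly $1,785/month for Kimi K2.5 under this split, versus a listed $0 for Qwen3.5 397B.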