Head-to-head comparison across two benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Provisional overall score: Gemini 2.5 Pro 66 · Qwen3.5 397B 66
Verified leaderboard positions: Gemini 2.5 Pro unranked · Qwen3.5 397B #11
Treat this as a split decision. Gemini 2.5 Pro makes more sense if coding is the priority or you need the larger 1M context window; Qwen3.5 397B is the better fit if knowledge is the priority or you want the cheaper token bill.
Coding: +3.5 difference
Knowledge: +24.4 difference
Metric                      Gemini 2.5 Pro    Qwen3.5 397B
Price (in/out per 1M tok)   $1.25 / $5.00     $0.00 / $0.00
Output speed                117 t/s           96 t/s
Latency                     21.19s            2.44s
Context window              1M                128K
Gemini 2.5 Pro and Qwen3.5 397B finish on the same provisional overall score, so this is less about a single winner and more about where the edge shows up. The provisional headline says tie; the benchmark table is where the real choice happens.
Gemini 2.5 Pro is also the more expensive model on tokens at $1.25 input / $5.00 output per 1M tokens, versus $0.00 input / $0.00 output per 1M tokens for Qwen3.5 397B. Because Qwen3.5 397B's listed price is zero, there is no meaningful cost multiple; at these rates it is effectively free. Gemini 2.5 Pro gives you the larger context window at 1M tokens, compared with 128K for Qwen3.5 397B.
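As a quick sanity check on the pricing gap, per-workload cost under the listed rates can be sketched like this (the workload sizes are illustrative, and the helper function is hypothetical, not a BenchLM API):

```python
def token_cost(input_tokens: int, output_tokens: int,
               in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost in USD for a workload, given per-1M-token input/output prices."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# Illustrative workload: 2M input tokens, 500K output tokens.
gemini_cost = token_cost(2_000_000, 500_000, 1.25, 5.00)  # 2.50 + 2.50 = 5.00
qwen_cost = token_cost(2_000_000, 500_000, 0.00, 0.00)    # 0.00 at the listed rates
```

At the prices shown above, any workload prices out to $0.00 on Qwen3.5 397B, which is why no cost ratio can be stated.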
Gemini 2.5 Pro and Qwen3.5 397B are tied on the provisional overall score, so the right pick depends on which category matters most for your use case.
Qwen3.5 397B has the edge for knowledge tasks in this comparison, averaging 65.2 versus 40.8. Inside this category, HLE is the benchmark that creates the most daylight between them.
Gemini 2.5 Pro has the edge for coding in this comparison, averaging 63.8 versus 60.3. Inside this category, SWE-bench Verified is the benchmark that creates the most daylight between them.