Head-to-head comparison across six benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Overall score: GLM-5 67 · Qwen3.5-27B 63
Verified leaderboard positions: GLM-5 #17 · Qwen3.5-27B #16
Pick GLM-5 if you want the stronger benchmark profile. Qwen3.5-27B only becomes the better choice if knowledge is the priority or you want the cheaper token bill.
Agentic: +4.6 difference (GLM-5 leads)
Coding: +0.2 difference (GLM-5 leads)
Reasoning: +0.2 difference (GLM-5 leads)
Knowledge: +9.9 difference (Qwen3.5-27B leads)
Multilingual: +0.9 difference (GLM-5 leads)
Instruction Following: +2.4 difference (Qwen3.5-27B leads)
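These deltas are just the absolute gaps between the per-category averages quoted in the analysis further down the page. A minimal Python sketch reproduces them from those published averages; no numbers beyond the ones on this page are assumed:

```python
# Per-category averages quoted in the analysis below (BenchLM provisional lane).
glm5 = {"Agentic": 56.2, "Coding": 63.2, "Reasoning": 60.8,
        "Knowledge": 70.7, "Multilingual": 83.1, "Instruction Following": 92.6}
qwen = {"Agentic": 51.6, "Coding": 63.0, "Reasoning": 60.6,
        "Knowledge": 80.6, "Multilingual": 82.2, "Instruction Following": 95.0}

for category, g in glm5.items():
    q = qwen[category]
    leader = "GLM-5" if g > q else "Qwen3.5-27B"
    # Print the absolute gap and which model it favors.
    print(f"{category}: +{abs(g - q):.1f} ({leader} leads)")
```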
Pricing (input/output per 1M tokens): GLM-5 $1.00 / $3.20 · Qwen3.5-27B $0.00 / $0.00
Throughput: GLM-5 74 t/s · Qwen3.5-27B N/A
Latency: GLM-5 1.64s · Qwen3.5-27B N/A
Context window: GLM-5 200K · Qwen3.5-27B 262K
GLM-5 is clearly ahead on the provisional aggregate, 67 to 63. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.
GLM-5's sharpest advantage is in agentic, where it averages 56.2 against 51.6. The single biggest benchmark swing on the page is Terminal-Bench 2.0, 56.2% to 41.6%. Qwen3.5-27B does hit back in knowledge, so the answer changes if that is the part of the workload you care about most.
GLM-5 is also the more expensive model on tokens at $1.00 input / $3.20 output per 1M tokens, versus $0.00 input / $0.00 output for Qwen3.5-27B. With the comparison priced at zero, a cost multiple cannot be computed; GLM-5 is simply the only model in the pair that carries a token bill. Qwen3.5-27B is the reasoning model in the pair, while GLM-5 is not. That usually helps on harder chain-of-thought-heavy tests, but it can also mean more latency and more token spend in real use. Qwen3.5-27B also gives you the larger context window at 262K, compared with 200K for GLM-5.
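To make the pricing gap concrete, here is a small sketch of what a month of traffic would cost at the listed rates. The 50M-input / 10M-output workload is a hypothetical volume chosen for illustration; the per-1M rates are the ones shown above.

```python
# Listed rates in USD per 1M tokens; Qwen3.5-27B is listed at $0 on both sides.
PRICES = {
    "GLM-5": (1.00, 3.20),        # (input rate, output rate)
    "Qwen3.5-27B": (0.00, 0.00),
}

def token_bill(model: str, input_tokens: float, output_tokens: float) -> float:
    """USD cost for a given token volume at the listed per-1M rates."""
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Hypothetical monthly workload: 50M input tokens, 10M output tokens.
for model in PRICES:
    print(f"{model}: ${token_bill(model, 50e6, 10e6):,.2f}")
# GLM-5: $82.00 / Qwen3.5-27B: $0.00 -- no "Nx cheaper" ratio exists at $0.
```

The only takeaway is directional: against a $0 list price, any nonzero bill is the whole story, which is why the page cannot quote a meaningful cost multiple.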
GLM-5 is ahead on BenchLM's provisional leaderboard, 67 to 63. The biggest single separator in this matchup is Terminal-Bench 2.0, where the scores are 56.2% and 41.6%.
Qwen3.5-27B has the edge for knowledge tasks in this comparison, averaging 80.6 versus 70.7. Inside this category, SuperGPQA is the benchmark that creates the most daylight between them.
GLM-5 has the edge for coding in this comparison, averaging 63.2 versus 63.0. Inside this category, SWE-bench Verified is the benchmark that creates the most daylight between them.
GLM-5 has the edge for reasoning in this comparison, averaging 60.8 versus 60.6. Inside this category, LongBench v2 is the benchmark that creates the most daylight between them.
GLM-5 has the edge for agentic tasks in this comparison, averaging 56.2 versus 51.6. Inside this category, Terminal-Bench 2.0 is the benchmark that creates the most daylight between them.
Qwen3.5-27B has the edge for instruction following in this comparison, averaging 95.0 versus 92.6. Inside this category, IFEval is the benchmark that creates the most daylight between them.
GLM-5 has the edge for multilingual tasks in this comparison, averaging 83.1 versus 82.2. Inside this category, MMLU-ProX is the benchmark that creates the most daylight between them.