GLM-4.5-Air vs Ministral 3 8B

Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, instruction-following, multilingual, and math workflows.

GLM-4.5-Air has the stronger overall profile here, landing at 35 versus 32. That is a real lead, but still close enough that category-level strengths matter more than the headline number.

GLM-4.5-Air's sharpest advantage is in multimodal & grounded, where it averages 39.6 against 32.4. The single biggest benchmark swing on the page is MRCRv2, 51 to 41. Ministral 3 8B does hit back in multilingual, so the answer changes if that is the part of the workload you care about most.
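To sanity-check the "biggest swing" claim, the sketch below ranks every benchmark on this page by absolute score gap. It is a minimal illustration with the scores hardcoded from the tables that follow, not part of this site's methodology.

```python
# Per-benchmark scores as displayed in the tables below: (GLM-4.5-Air, Ministral 3 8B).
scores = {
    "Terminal-Bench 2.0": (28, 26), "BrowseComp": (37, 36), "OSWorld-Verified": (28, 27),
    "HumanEval": (27, 23), "SWE-bench Verified": (15, 16), "LiveCodeBench": (15, 15),
    "SWE-bench Pro": (14, 13), "MMMU-Pro": (36, 27), "OfficeQA Pro": (44, 39),
    "SimpleQA": (33, 28), "MuSR": (31, 26), "BBH": (63, 63),
    "LongBench v2": (47, 38), "MRCRv2": (51, 41),
    "MMLU": (35, 28), "GPQA": (34, 27), "SuperGPQA": (32, 25), "OpenBookQA": (30, 23),
    "MMLU-Pro": (51, 52), "HLE": (4, 3), "FrontierScience": (37, 32), "IFEval": (68, 69),
    "MGSM": (63, 63), "MMLU-ProX": (57, 61),
    "AIME 2023": (35, 30), "AIME 2024": (37, 32), "AIME 2025": (36, 33),
    "HMMT Feb 2023": (31, 26), "HMMT Feb 2024": (33, 28), "HMMT Feb 2025": (32, 27),
    "BRUMO 2025": (34, 29), "MATH-500": (57, 60),
}

# Sort by absolute gap, largest first, and print the top three.
by_gap = sorted(scores.items(), key=lambda kv: abs(kv[1][0] - kv[1][1]), reverse=True)
for name, (glm, ministral) in by_gap[:3]:
    print(f"{name}: {glm} vs {ministral} (gap {glm - ministral:+d})")
```

MRCRv2 comes out on top at ten points, with MMMU-Pro and LongBench v2 next at nine each.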

Quick Verdict

Pick GLM-4.5-Air if you want the stronger benchmark profile. Ministral 3 8B only becomes the better choice if multilingual is the priority.
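How much the multilingual edge matters depends on your traffic mix. Here is a minimal sketch of one way to weigh it, using the category averages from the tables below; the workload weights are hypothetical placeholders, not a recommendation.

```python
# Category averages from this page: (GLM-4.5-Air, Ministral 3 8B).
CATEGORY_AVGS = {
    "agentic": (30.3, 28.9), "coding": (14.6, 14.2), "multimodal": (39.6, 32.4),
    "reasoning": (42.6, 36.1), "knowledge": (31.1, 28.0),
    "instruction_following": (68.0, 69.0), "multilingual": (59.1, 61.7),
    "math": (44.4, 43.3),
}

def weighted_scores(weights: dict[str, float]) -> tuple[float, float]:
    """Weight each category average by its share of the workload."""
    total = sum(weights.values())
    glm = sum(w * CATEGORY_AVGS[cat][0] for cat, w in weights.items()) / total
    ministral = sum(w * CATEGORY_AVGS[cat][1] for cat, w in weights.items()) / total
    return glm, ministral

# Hypothetical translation-heavy workload: multilingual dominates.
glm, ministral = weighted_scores(
    {"multilingual": 0.6, "instruction_following": 0.2, "knowledge": 0.2}
)
print(f"GLM-4.5-Air {glm:.1f} vs Ministral 3 8B {ministral:.1f}")  # 55.3 vs 56.4
```

Under that mix Ministral 3 8B edges ahead, 56.4 to 55.3; shift any real weight toward reasoning or multimodal work and the ordering flips back.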

Agentic

Winner: GLM-4.5-Air (category average 30.3 vs 28.9)

Benchmark           GLM-4.5-Air   Ministral 3 8B
Terminal-Bench 2.0  28            26
BrowseComp          37            36
OSWorld-Verified    28            27

Coding

Winner: GLM-4.5-Air (category average 14.6 vs 14.2)

Benchmark           GLM-4.5-Air   Ministral 3 8B
HumanEval           27            23
SWE-bench Verified  15            16
LiveCodeBench       15            15
SWE-bench Pro       14            13

Multimodal & Grounded

Winner: GLM-4.5-Air (category average 39.6 vs 32.4)

Benchmark     GLM-4.5-Air   Ministral 3 8B
MMMU-Pro      36            27
OfficeQA Pro  44            39

Reasoning

Winner: GLM-4.5-Air (category average 42.6 vs 36.1)

Benchmark     GLM-4.5-Air   Ministral 3 8B
SimpleQA      33            28
MuSR          31            26
BBH           63            63
LongBench v2  47            38
MRCRv2        51            41

Knowledge

Winner: GLM-4.5-Air (category average 31.1 vs 28)

Benchmark        GLM-4.5-Air   Ministral 3 8B
MMLU             35            28
GPQA             34            27
SuperGPQA        32            25
OpenBookQA       30            23
MMLU-Pro         51            52
HLE              4             3
FrontierScience  37            32

Instruction Following

Winner: Ministral 3 8B (category average 69 vs 68)

Benchmark  GLM-4.5-Air   Ministral 3 8B
IFEval     68            69

Multilingual

Winner: Ministral 3 8B (category average 61.7 vs 59.1)

Benchmark  GLM-4.5-Air   Ministral 3 8B
MGSM       63            63
MMLU-ProX  57            61

Mathematics

Winner: GLM-4.5-Air (category average 44.4 vs 43.3)

Benchmark      GLM-4.5-Air   Ministral 3 8B
AIME 2023      35            30
AIME 2024      37            32
AIME 2025      36            33
HMMT Feb 2023  31            26
HMMT Feb 2024  33            28
HMMT Feb 2025  32            27
BRUMO 2025     34            29
MATH-500       57            60

Frequently Asked Questions

Which is better, GLM-4.5-Air or Ministral 3 8B?

GLM-4.5-Air is ahead overall, 35 to 32. The biggest single separator in this matchup is MRCRv2, where the scores are 51 and 41.

Which is better for knowledge tasks, GLM-4.5-Air or Ministral 3 8B?

GLM-4.5-Air has the edge for knowledge tasks in this comparison, averaging 31.1 versus 28. Inside this category, MMLU shows the widest gap, 35 to 28, though GPQA, SuperGPQA, and OpenBookQA post the same seven-point spread; MMLU-Pro is the one knowledge benchmark Ministral 3 8B wins, 52 to 51.

Which is better for coding, GLM-4.5-Air or Ministral 3 8B?

GLM-4.5-Air has the edge for coding in this comparison, averaging 14.6 versus 14.2. Inside this category, HumanEval is the benchmark that creates the most daylight between them.

Which is better for math, GLM-4.5-Air or Ministral 3 8B?

GLM-4.5-Air has the edge for math in this comparison, averaging 44.4 versus 43.3. Inside this category, AIME 2023 shows the widest gap, 35 to 30, a five-point spread matched across most of the competition sets; MATH-500 is the one math benchmark Ministral 3 8B wins, 60 to 57.

Which is better for reasoning, GLM-4.5-Air or Ministral 3 8B?

GLM-4.5-Air has the edge for reasoning in this comparison, averaging 42.6 versus 36.1. Inside this category, MRCRv2 is the benchmark that creates the most daylight between them.

Which is better for agentic tasks, GLM-4.5-Air or Ministral 3 8B?

GLM-4.5-Air has the edge for agentic tasks in this comparison, averaging 30.3 versus 28.9. Inside this category, Terminal-Bench 2.0 is the benchmark that creates the most daylight between them.

Which is better for multimodal and grounded tasks, GLM-4.5-Air or Ministral 3 8B?

GLM-4.5-Air has the edge for multimodal and grounded tasks in this comparison, averaging 39.6 versus 32.4. Inside this category, MMMU-Pro is the benchmark that creates the most daylight between them.

Which is better for instruction following, GLM-4.5-Air or Ministral 3 8B?

Ministral 3 8B has the edge for instruction following in this comparison, 69 versus 68. IFEval is the only benchmark in this category, so the margin rests on a single score.

Which is better for multilingual tasks, GLM-4.5-Air or Ministral 3 8B?

Ministral 3 8B has the edge for multilingual tasks in this comparison, averaging 61.7 versus 59.1. Inside this category, MMLU-ProX is the benchmark that creates the most daylight between them.

Last updated: March 12, 2026
