GLM-4.5 Benchmark Scores & Performance

Benchmark analysis of GLM-4.5 by Tsinghua across 14 tests.

Creator: Tsinghua
Source Type: Proprietary
Reasoning: Non-Reasoning
Context Window: 128K
Overall Score: 28 (#80 of 88)

Knowledge Benchmarks

MMLU: 37
GPQA: 36
SuperGPQA: 34
OpenBookQA: 32

Coding Benchmarks

HumanEval: 29

Mathematics Benchmarks

AIME 2023: 37
AIME 2024: 39
AIME 2025: 38
HMMT Feb 2023: 33
HMMT Feb 2024: 35
HMMT Feb 2025: 34
BRUMO 2025: 36

Reasoning Benchmarks

SimpleQA: 35
MuSR: 33

Frequently Asked Questions

How does GLM-4.5 perform overall in AI benchmarks?

GLM-4.5 ranks #80 out of 88 models with an overall score of 28. It was created by Tsinghua and features a 128K context window.

Is GLM-4.5 good for knowledge and understanding?

GLM-4.5 ranks #80 out of 88 models in knowledge and understanding benchmarks with an average score of 34.8. There are stronger options in this category.

Is GLM-4.5 good for coding and programming?

GLM-4.5 ranks #80 out of 88 models in coding and programming benchmarks with an average score of 29. There are stronger options in this category.

Is GLM-4.5 good for mathematics?

GLM-4.5 ranks #80 out of 88 models in mathematics benchmarks with an average score of 36. There are stronger options in this category.

Is GLM-4.5 good for reasoning and logic?

GLM-4.5 ranks #80 out of 88 models in reasoning and logic benchmarks with an average score of 34. There are stronger options in this category.
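The category averages quoted in the answers above can be recomputed directly from the per-benchmark scores listed earlier; a minimal sketch (the category groupings follow this page's sections, not any official taxonomy):

```python
# Recompute GLM-4.5's category averages from the per-benchmark scores above.
scores = {
    "knowledge": [37, 36, 34, 32],                 # MMLU, GPQA, SuperGPQA, OpenBookQA
    "coding": [29],                                # HumanEval
    "mathematics": [37, 39, 38, 33, 35, 34, 36],   # AIME, HMMT, BRUMO
    "reasoning": [35, 33],                         # SimpleQA, MuSR
}

averages = {cat: round(sum(v) / len(v), 1) for cat, v in scores.items()}
print(averages)
# {'knowledge': 34.8, 'coding': 29.0, 'mathematics': 36.0, 'reasoning': 34.0}
```

These match the 34.8, 29, 36, and 34 figures quoted in the answers above.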

What is the context window size of GLM-4.5?

GLM-4.5 has a context window of 128K tokens, which determines how much text it can process in a single interaction.
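To make the 128K-token limit concrete, here is a rough sketch of checking whether a document fits in the context window. It assumes roughly 4 characters per token, a common English-text heuristic; actual token counts depend on the model's tokenizer, and the reserved output budget below is an arbitrary illustrative value.

```python
# Rough estimate: will a piece of text fit in GLM-4.5's 128K-token context
# window? Uses a ~4 characters/token heuristic, NOT the model's real tokenizer.
CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # heuristic approximation for English text

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Estimate whether `text`, plus a reserved output budget, fits."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# A ~400,000-character document is roughly 100K tokens, well inside the window.
print(fits_in_context("a" * 400_000))  # True
```

In practice you would count tokens with the model's own tokenizer rather than a character heuristic, but the budgeting logic is the same.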