GLM-5 (Reasoning) Benchmark Scores & Performance

Benchmark analysis of GLM-5 (Reasoning) by Zhipu AI across 14 tests.

Creator: Zhipu AI

Source Type: Open Weight

Reasoning: Yes

Context Window: 200K

Overall Score: 75 (#14 of 88)

Knowledge Benchmarks

MMLU: 96
GPQA: 94
SuperGPQA: 92
OpenBookQA: 90

Coding Benchmarks

HumanEval: 88

Mathematics Benchmarks

AIME 2023: 98
AIME 2024: 99
AIME 2025: 98
HMMT Feb 2023: 94
HMMT Feb 2024: 96
HMMT Feb 2025: 95
BRUMO 2025: 96

Reasoning Benchmarks

SimpleQA: 92
MuSR: 90

Frequently Asked Questions

How does GLM-5 (Reasoning) perform overall in AI benchmarks?

GLM-5 (Reasoning) ranks #14 out of 88 models with an overall score of 75. It is developed by Zhipu AI and offers a 200K-token context window.

Is GLM-5 (Reasoning) good for knowledge and understanding?

GLM-5 (Reasoning) ranks #14 out of 88 models in knowledge and understanding benchmarks with an average score of 93. There are stronger options in this category.

Is GLM-5 (Reasoning) good for coding and programming?

GLM-5 (Reasoning) ranks #14 out of 88 models in coding and programming benchmarks with an average score of 88. There are stronger options in this category.

Is GLM-5 (Reasoning) good for mathematics?

GLM-5 (Reasoning) ranks #14 out of 88 models in mathematics benchmarks with an average score of 96.6. There are stronger options in this category.

Is GLM-5 (Reasoning) good for reasoning and logic?

GLM-5 (Reasoning) ranks #14 out of 88 models in reasoning and logic benchmarks with an average score of 91. There are stronger options in this category.
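As a quick sanity check, the category averages quoted in the answers above can be recomputed from the per-benchmark scores listed on this page. This is a minimal sketch using a plain arithmetic mean; the site's own aggregation or weighting may differ.

```python
# Recompute each category average from the per-benchmark scores on this page.
scores = {
    "knowledge": [96, 94, 92, 90],            # MMLU, GPQA, SuperGPQA, OpenBookQA
    "coding": [88],                           # HumanEval
    "math": [98, 99, 98, 94, 96, 95, 96],     # AIME 2023-2025, HMMT Feb 2023-2025, BRUMO 2025
    "reasoning": [92, 90],                    # SimpleQA, MuSR
}

# Plain arithmetic mean, rounded to one decimal place.
averages = {name: round(sum(vals) / len(vals), 1) for name, vals in scores.items()}
print(averages)
# {'knowledge': 93.0, 'coding': 88.0, 'math': 96.6, 'reasoning': 91.0}
```

The results match the FAQ figures: 93 for knowledge, 88 for coding, 96.6 for mathematics, and 91 for reasoning.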

Is GLM-5 (Reasoning) open source?

GLM-5 (Reasoning) is released as an open-weight model by Zhipu AI: the model weights are publicly available, so it can be downloaded and run locally or fine-tuned for specific use cases. Note that open weight is not the same as fully open source, which would also cover the training code and data.

What is the context window size of GLM-5 (Reasoning)?

GLM-5 (Reasoning) has a context window of 200K tokens, which determines how much text it can process in a single interaction.