GLM-4.7-Flash Benchmark Scores & Performance

Benchmark analysis of GLM-4.7-Flash by Zhipu AI across 14 tests.

Creator

Zhipu AI

Source Type

Open Weight

Reasoning

Yes

Context Window

200K

Overall Score

56 (#50 of 88)

Knowledge Benchmarks

MMLU
66
GPQA
65
SuperGPQA
63
OpenBookQA
61

Coding Benchmarks

HumanEval
58

Mathematics Benchmarks

AIME 2023
66
AIME 2024
68
AIME 2025
67
HMMT Feb 2023
62
HMMT Feb 2024
64
HMMT Feb 2025
63
BRUMO 2025
65

Reasoning Benchmarks

SimpleQA
63
MuSR
61
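The per-category averages quoted in the FAQ below can be reproduced from the individual scores listed above; a minimal sketch in Python:

```python
from statistics import mean

# Individual benchmark scores, taken from the tables above.
knowledge = {"MMLU": 66, "GPQA": 65, "SuperGPQA": 63, "OpenBookQA": 61}
coding = {"HumanEval": 58}
mathematics = {
    "AIME 2023": 66, "AIME 2024": 68, "AIME 2025": 67,
    "HMMT Feb 2023": 62, "HMMT Feb 2024": 64, "HMMT Feb 2025": 63,
    "BRUMO 2025": 65,
}
reasoning = {"SimpleQA": 63, "MuSR": 61}

for name, scores in [("Knowledge", knowledge), ("Coding", coding),
                     ("Mathematics", mathematics), ("Reasoning", reasoning)]:
    # Round to one decimal, matching the precision used in the FAQ.
    print(f"{name}: {round(mean(scores.values()), 1)}")
```

Running this yields Knowledge 63.8, Coding 58, Mathematics 65, and Reasoning 62, matching the figures in the FAQ answers.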

Frequently Asked Questions

How does GLM-4.7-Flash perform overall in AI benchmarks?

GLM-4.7-Flash ranks #50 out of 88 models with an overall score of 56. It is created by Zhipu AI and features a 200K context window.

Is GLM-4.7-Flash good for knowledge and understanding?

GLM-4.7-Flash ranks #50 out of 88 models in knowledge and understanding benchmarks with an average score of 63.8. There are stronger options in this category.

Is GLM-4.7-Flash good for coding and programming?

GLM-4.7-Flash ranks #51 out of 88 models in coding and programming benchmarks with an average score of 58. There are stronger options in this category.

Is GLM-4.7-Flash good for mathematics?

GLM-4.7-Flash ranks #50 out of 88 models in mathematics benchmarks with an average score of 65. There are stronger options in this category.

Is GLM-4.7-Flash good for reasoning and logic?

GLM-4.7-Flash ranks #52 out of 88 models in reasoning and logic benchmarks with an average score of 62. There are stronger options in this category.

Is GLM-4.7-Flash open source?

Yes, GLM-4.7-Flash is an open-weight model created by Zhipu AI, meaning it can be downloaded and run locally or fine-tuned for specific use cases.

What is the context window size of GLM-4.7-Flash?

GLM-4.7-Flash has a context window of 200K tokens, which determines how much text it can process in a single interaction.
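To give a rough sense of scale, the 200K-token window can be translated into characters and words using the common heuristics of ~4 English characters and ~0.75 words per token (assumptions only; actual counts depend on GLM-4.7-Flash's own tokenizer):

```python
# Rough capacity estimate for a 200K-token context window.
# The per-token ratios below are rule-of-thumb heuristics for
# English text, not properties of GLM-4.7-Flash's tokenizer.
CONTEXT_TOKENS = 200_000
CHARS_PER_TOKEN = 4
WORDS_PER_TOKEN = 0.75

approx_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN
approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
print(f"~{approx_chars:,} characters, ~{approx_words:,} words")
# → ~800,000 characters, ~150,000 words
```

By this estimate, the window is large enough to hold several full-length novels' worth of text in a single interaction.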