Benchmark analysis of GLM-4.5 by Tsinghua across 14 tests.
Creator: Tsinghua
Source Type: Proprietary
Reasoning: Non-Reasoning
Context Window: 128K
GLM-4.5 ranks #80 out of 88 models with an overall score of 28. It is created by Tsinghua and features a 128K context window.
Across benchmark categories, GLM-4.5 likewise ranks #80 out of 88 models: knowledge and understanding (average score 34.8), coding and programming (29), mathematics (36), and reasoning and logic (34). Stronger options exist in every category.
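For context on how the category figures relate to one another, here is a minimal sketch that takes the unweighted mean of the four category averages listed above. Note this is illustrative only: the published overall score (28) is computed across all 14 individual tests, not from these four category averages, so the naive mean differs from it.

```python
# Illustrative only: an unweighted mean of the four category averages
# reported for GLM-4.5. The site's overall score (28) is derived from
# all 14 tests and uses its own aggregation, so it will not match.
category_scores = {
    "knowledge and understanding": 34.8,
    "coding and programming": 29,
    "mathematics": 36,
    "reasoning and logic": 34,
}

naive_mean = sum(category_scores.values()) / len(category_scores)
print(f"Unweighted category mean: {naive_mean:.2f}")  # 33.45
```

The gap between this naive mean (33.45) and the published overall score (28) suggests the leaderboard weights individual tests rather than category averages.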
GLM-4.5 has a context window of 128K tokens, which determines how much text it can process in a single interaction.
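To make the context-window figure concrete, the sketch below estimates whether a piece of text fits inside a 128K-token window. It relies on the rough ~4-characters-per-token heuristic for English text; actual token counts depend on the model's tokenizer, and the `reserved_for_output` budget is an assumed parameter, not anything specified by the model.

```python
# Rough sketch: estimate whether text fits in a 128K-token context
# window. Uses the common ~4-chars-per-token heuristic for English;
# real counts depend on the model's actual tokenizer.
CONTEXT_WINDOW = 128_000   # assumed nominal size of a "128K" window
CHARS_PER_TOKEN = 4        # heuristic, not the model's tokenizer

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """Estimate whether `text` fits while reserving room for a response."""
    estimated_tokens = len(text) // CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW - reserved_for_output

print(fits_in_context("hello " * 1000))   # ~1.5K tokens: True
print(fits_in_context("x" * 1_000_000))   # ~250K tokens: False
```

In practice a check like this is a pre-filter; a production pipeline would count tokens with the model's own tokenizer before truncating or chunking input.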