
GLM-5.1

Z.AI · Current · Released Apr 7, 2026
Overall Score
83 · Provisional #14 of 115 · Verified #21 of 23
Arena Elo
1471
Categories Ranked
5 of 8
Price (1M tokens)
$1.4 in / $4.4 out
Speed
N/A
Context
203K
Open Weight · Self-host · Reasoning
Confidence
snapshot

Self-host vs API cost

Estimates at 50,000 req/day · 1000 tokens/req average.

GLM-5.1
API / mo: $4,350
Self-host / mo: $18,221
Break-even: 264M tokens/day
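For a rough sense of where these numbers come from, the sketch below recomputes the comparison from the listed token prices ($1.4 in / $4.4 out per 1M tokens) and the 50,000 req/day · 1000 tokens/req load. The 50/50 input/output split and the 30-day month are assumptions, not BenchLM's published methodology.

```python
# Rough sketch of the self-host vs. API comparison above.
# Assumptions (not BenchLM's exact methodology): 50/50 input/output token
# split, 30-day month, flat monthly self-hosting cost.

PRICE_IN = 1.4                 # $ per 1M input tokens (from the listing)
PRICE_OUT = 4.4                # $ per 1M output tokens (from the listing)
SELF_HOST_PER_MONTH = 18_221   # $ / month, BenchLM's self-host estimate
DAYS_PER_MONTH = 30            # assumption

def blended_price(input_share: float = 0.5) -> float:
    """Blended $ per 1M tokens for an assumed input/output split."""
    return input_share * PRICE_IN + (1 - input_share) * PRICE_OUT

def api_cost_per_month(tokens_per_day: float, input_share: float = 0.5) -> float:
    """Monthly API bill at a given daily token volume."""
    return tokens_per_day / 1e6 * blended_price(input_share) * DAYS_PER_MONTH

def break_even_tokens_per_day(input_share: float = 0.5) -> float:
    """Daily token volume at which the API bill matches the self-host cost."""
    daily_budget = SELF_HOST_PER_MONTH / DAYS_PER_MONTH
    return daily_budget / blended_price(input_share) * 1e6

tokens_per_day = 50_000 * 1_000   # 50,000 req/day x 1,000 tokens/req = 50M/day
print(f"API / mo:   ${api_cost_per_month(tokens_per_day):,.0f}")
print(f"Break-even: {break_even_tokens_per_day() / 1e6:.0f}M tokens/day")
```

At this 50/50 split the API estimate matches the listed $4,350/mo; the break-even figure shifts with the assumed input/output mix, which is why it does not land exactly on the 264M tokens/day shown above.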

According to BenchLM.ai, GLM-5.1 ranks #14 out of 115 models on the provisional leaderboard with an overall score of 83/100. It also ranks #21 out of 23 on the verified leaderboard. This places it in the upper tier of AI models, with competitive scores across most benchmark categories.

GLM-5.1 is an open weight model with a 203K-token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.

GLM-5.1 sits inside the GLM-5 family alongside GLM-5, GLM-5 (Reasoning), GLM-5V-Turbo, and GLM-5-Turbo. BenchLM links it directly to GLM-5 as the earlier related model in that lineage. This profile currently covers 16 of the 188 tracked benchmarks. BenchLM only exposes non-generated benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.

Its strongest category is Knowledge (#9), while its weakest is Reasoning (#31). This performance profile makes it particularly effective for knowledge-intensive tasks like research, analysis, and factual Q&A.

Ranking Distribution

Category rank across 6 benchmark categories — sorted by best rank

Category Performance

Scores across all benchmark categories (0-100 scale)

Category Breakdown

Agentic

81.0 / 100
Weight: 22% · 6 benchmarks
Terminal-Bench 2.0, BrowseComp, OSWorld-Verified, GAIA, TAU-bench, WebArena

Coding

#12
83.9 / 100
Weight: 20% · 4 benchmarks
SWE-bench Verified, LiveCodeBench, SWE-bench Pro, SWE-Rebench, SciCode

Reasoning

#31
63.9 / 100
Weight: 17% · 0 benchmarks
MuSR, LongBench v2, MRCRv2, ARC-AGI-2

Knowledge

#9
85.1 / 100
Weight: 12% · 2 benchmarks
GPQA, SuperGPQA, MMLU-Pro, HLE, FrontierScience, SimpleQA

Math

#15
89.6 / 100
Weight: 5% · 4 benchmarks
AIME 2025, BRUMO 2025, MATH-500, FrontierMath

Multilingual

0.0 / 100
Weight: 7% · 0 benchmarks
MGSM, MMLU-ProX

Multimodal

0.0 / 100
Weight: 12% · 0 benchmarks
MMMU-Pro, OfficeQA Pro, CharXiv, CharXiv w/o tools

Inst. Following

#9
92.7 / 100
Weight: 5% · 0 benchmarks
IFEval, IFBench
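The category weights listed above add up to 100%. As a rough illustration of how they could combine into a single number, here is a minimal sketch of a weighted average over the covered categories, with the weights of uncovered categories renormalized away. This aggregation rule is an assumption, not BenchLM's published formula, so its output will not exactly reproduce the 83/100 overall score.

```python
# Hypothetical aggregation sketch: weighted average over covered categories,
# renormalizing weights where coverage is missing. Assumed formula only,
# not BenchLM's documented methodology.

# (score 0-100, weight, has benchmark coverage), taken from the breakdown above
categories = {
    "Agentic":         (81.0, 0.22, True),
    "Coding":          (83.9, 0.20, True),
    "Reasoning":       (63.9, 0.17, True),
    "Knowledge":       (85.1, 0.12, True),
    "Math":            (89.6, 0.05, True),
    "Multilingual":    (0.0,  0.07, False),
    "Multimodal":      (0.0,  0.12, False),
    "Inst. Following": (92.7, 0.05, True),
}

covered = [(score, weight) for score, weight, has_data in categories.values() if has_data]
total_weight = sum(weight for _, weight in covered)
overall = sum(score * weight for score, weight in covered) / total_weight

print(f"Weighted average over covered categories: {overall:.1f}")
```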

Chatbot Arena Performance

Text Overall: 1471 (CI ±6.1) · 11,071 votes
Coding: 1524 (CI ±11.4) · 2,679 votes
Math: 1469 (CI ±21.3) · 714 votes
Instruction Following: 1463 (CI ±10.0) · 3,246 votes
Creative Writing: 1454 (CI ±14.5) · 1,739 votes
Multi-turn: 1477 (CI ±14.2) · 1,678 votes
Hard Prompts: 1493 (CI ±7.5) · 6,440 votes
Hard Prompts (English): 1500 (CI ±10.4) · 3,124 votes
Longer Query: 1491 (CI ±9.9) · 3,489 votes
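To read the Arena ratings above, a rating gap maps to an expected head-to-head win rate. The sketch below applies the textbook Elo formula with its usual 400-point scale; the Arena's own Bradley-Terry fit is close but not identical, and the 1500-rated comparison model is a made-up example.

```python
# Interpreting Arena Elo gaps: expected win probability under the standard
# Elo model (logistic curve, 400-point scale). Textbook formula, used here
# only as an interpretation aid.

def win_probability(rating_a: float, rating_b: float, scale: float = 400.0) -> float:
    """P(model A is preferred over model B in a head-to-head vote)."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / scale))

# Example: GLM-5.1's Text Overall rating (1471) vs. a hypothetical model at 1500.
print(f"{win_probability(1471, 1500):.1%}")   # ~45.8%
```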

Benchmark Details

Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.

GLM-5 Family

snapshot · 5.1

Canonical Entry

GLM-5

Related Earlier Model

GLM-5

Frequently Asked Questions

How does GLM-5.1 perform overall in AI benchmarks?

GLM-5.1 currently ranks #14 out of 115 models on BenchLM's provisional leaderboard with an overall score of 83. It also ranks #21 out of 23 on the verified leaderboard. It is created by Z.AI and has a 203K-token context window.

Is GLM-5.1 good for knowledge and understanding?

GLM-5.1 ranks #9 out of 115 models in knowledge and understanding benchmarks with an average score of 85.1. It is among the top performers in this category.

Is GLM-5.1 good for coding and programming?

GLM-5.1 ranks #12 out of 115 models in coding and programming benchmarks with an average score of 83.9. There are stronger options in this category.

Is GLM-5.1 good for mathematics?

GLM-5.1 ranks #15 out of 115 models in mathematics benchmarks with an average score of 89.6. There are stronger options in this category.

Is GLM-5.1 good for agentic tool use and computer tasks?

GLM-5.1 has visible benchmark coverage in agentic tool use and computer tasks, but BenchLM does not currently assign it a global category rank there.

Is GLM-5.1 open source?

Yes, GLM-5.1 is an open weight model created by Z.AI, meaning it can be downloaded and run locally or fine-tuned for specific use cases.
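Because the weights are open, a local run would typically look like any other Hugging Face-style causal LM load. The sketch below uses the standard transformers API; the repository ID "zai-org/GLM-5.1" is a placeholder guess, and the dtype/device settings assume a bf16-capable GPU, so check the official release for the actual identifiers and recommended settings.

```python
# Minimal local-inference sketch for an open-weight model via transformers.
# "zai-org/GLM-5.1" is a hypothetical repo id, not a confirmed one; swap in
# the identifier from the official release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "zai-org/GLM-5.1"  # assumption: placeholder repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # assumption: bf16-capable GPU
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Summarize the GLM-5 model family."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```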

Which sibling models are related to GLM-5.1?

GLM-5.1 belongs to the GLM-5 family. Related variants on BenchLM include GLM-5, GLM-5 (Reasoning), GLM-5V-Turbo, and GLM-5-Turbo.

Does GLM-5.1 have full benchmark coverage on BenchLM?

Not yet. GLM-5.1 currently has 16 published benchmark scores out of the 188 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.

What is the context window size of GLM-5.1?

GLM-5.1 has a context window of 203K tokens, which determines how much text it can process in a single interaction.
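As a rough way to reason about that limit, the sketch below estimates whether an input fits in the 203K-token window using the common ~4-characters-per-token heuristic. The exact count depends on the model's own tokenizer, so treat this as an approximation rather than a real budget check.

```python
# Rough context-budget check using the ~4 characters per token rule of thumb.
# The real count comes from the model's own tokenizer; this is only an estimate.

CONTEXT_WINDOW = 203_000   # tokens, from the listing
CHARS_PER_TOKEN = 4        # assumption: typical English text

def estimated_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(text: str, reserve_for_output: int = 4_000) -> bool:
    """True if the prompt plus a reserved output budget fits in the window."""
    return estimated_tokens(text) + reserve_for_output <= CONTEXT_WINDOW

document = "..." * 100_000   # stand-in for a long input
print(estimated_tokens(document), fits_in_context(document))
```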

Last updated: May 11, 2026 · Runtime metrics stay blank until BenchLM has a sourced snapshot.
