Benchmark analysis of Gemma 4 26B A4B by Google across 8 sourced tests on BenchLM.
According to BenchLM.ai, Gemma 4 26B A4B ranks #43 out of 103 models with an overall score of 64/100. While not a frontier model, it offers specific advantages depending on the use case.
Gemma 4 26B A4B is an open-weight model with a 256K-token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.
Gemma 4 26B A4B sits inside the Gemma 4 family alongside Gemma 4 31B, Gemma 4 E4B, and Gemma 4 E2B. This profile currently covers 8 of 125 tracked benchmarks. BenchLM only exposes verified benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.
Its strongest category is Multimodal & Grounded (#42), while its weakest is Knowledge (#49). This profile makes it particularly well suited to screenshots, documents, charts, and grounded multimodal workflows.
Provider: Google
Source Type: Open Weight
Reasoning: Reasoning
Context Window: 256K
Model Status: Current
Release Date: Apr 2, 2026
Overall Score: 64/100
Pricing: $0.00 / $0.00 (input / output per 1M tokens)
Runtime: N/A (latency unavailable)
Arena Elo: 1440.64
Text Overall
Human-preference results from LM Arena text leaderboards. These are displayed separately from BenchLM benchmark scoring.

Text Overall: 1440.64 (±8.59 · 4,548 votes)
Coding: 1487.94 (±17.55 · 1,044 votes)
Math: 1469.98 (±33.66 · 267 votes)
Instruction Following: 1446.18 (±16.2 · 1,204 votes)
Creative Writing: 1404.8 (±21.49 · 750 votes)
Multi-turn: 1456.52 (±20.07 · 811 votes)
Hard Prompts: 1463.97 (±11.54 · 2,478 votes)
Hard Prompts (English): 1473.34 (±16.96 · 1,136 votes)
Longer Query: 1455.9 (±16.45 · 1,222 votes)
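To give the ratings above some intuition, the gap between two Elo-style scores maps to an expected preference rate via the standard Elo expected-score formula. This is only an illustration: LM Arena's actual methodology (Bradley-Terry fits with bootstrapped confidence intervals) is more involved, and the ± margins above mean small gaps may not be significant.

```python
def expected_win_rate(rating_a: float, rating_b: float) -> float:
    """Probability that A is preferred over B under the standard Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Coding (1487.94) vs. Text Overall (1440.64): a ~47-point gap
p = expected_win_rate(1487.94, 1440.64)
print(f"{p:.3f}")  # roughly 0.57 — a modest but real preference edge
```

A 47-point gap thus corresponds to winning roughly 57% of head-to-head comparisons, which is why differences smaller than the quoted ± margins should be read cautiously.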
HLE w/o tools 2026 · Quarterly refresh · updated April 2, 2026
Gemma 4 26B A4B ranks #43 out of 103 models with an overall score of 64. It was created by Google and features a 256K context window.
Gemma 4 26B A4B ranks #49 out of 103 models in knowledge and understanding benchmarks with an average score of 56.1. There are stronger options in this category.
Gemma 4 26B A4B has visible benchmark coverage in coding and programming, but BenchLM does not currently assign it a global category rank there.
Gemma 4 26B A4B has visible benchmark coverage in reasoning and logic, but BenchLM does not currently assign it a global category rank there.
Gemma 4 26B A4B ranks #42 out of 103 models in multimodal and grounded tasks benchmarks with an average score of 73.8. There are stronger options in this category.
Yes, Gemma 4 26B A4B is an open weight model created by Google, meaning it can be downloaded and run locally or fine-tuned for specific use cases.
Gemma 4 26B A4B belongs to the Gemma 4 family. Related variants on BenchLM include Gemma 4 31B, Gemma 4 E4B, and Gemma 4 E2B.
Not yet. Gemma 4 26B A4B currently has 8 verified benchmark scores out of the 125 benchmarks BenchLM tracks. BenchLM only exposes verified public benchmark rows, so missing categories stay blank until a sourced evaluation is available.
Gemma 4 26B A4B has a context window of 256K tokens, which determines how much text it can process in a single interaction.
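As a rough sketch of what a 256K-token budget means in practice, the snippet below estimates whether a document fits, using a crude 4-characters-per-token heuristic. The real Gemma tokenizer will produce different counts, and whether "256K" means 256,000 or 262,144 tokens is an assumption here; treat this as a ballpark check only.

```python
# Assumptions: 256K read as 256,000 tokens; ~4 characters per token,
# a common rough heuristic for English text (real tokenizers vary).
CONTEXT_WINDOW_TOKENS = 256_000
CHARS_PER_TOKEN = 4

def estimated_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, reserve_for_output: int = 4_096) -> bool:
    """Check the prompt fits, leaving headroom for the model's response."""
    return estimated_tokens(text) + reserve_for_output <= CONTEXT_WINDOW_TOKENS

doc = "word " * 100_000  # ~500,000 characters, ~125K estimated tokens
print(fits_in_context(doc))  # True: well under the 256K budget
```

By this estimate, a 256K-token window holds on the order of one million characters of English prose, i.e. several long books in a single prompt.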