Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, and math workflows.
Gemma 4 26B A4B: overall score 64 · ahead in 1 of 8 categories
MiniMax M2.7: overall score ~66 · overall winner · ahead in 0 of 8 categories
Pick MiniMax M2.7 if you want the stronger benchmark profile. Gemma 4 26B A4B only becomes the better choice if coding is the priority or you want the cheaper token bill.
MiniMax M2.7 has the cleaner overall profile here, landing at 66 versus 64. It is a real lead, but still close enough that category-level strengths matter more than the headline number.
MiniMax M2.7 is also the more expensive model on tokens at $0.30 input / $1.20 output per 1M tokens, versus $0.00 input / $0.00 output per 1M tokens for Gemma 4 26B A4B. With the Gemma side listed as free, a cost multiple is not meaningful; every MiniMax token is pure incremental spend. Gemma 4 26B A4B is the reasoning model in the pair, while MiniMax M2.7 is not. That usually helps on harder chain-of-thought-heavy tests, but it can also mean more latency and more token spend in real use. Gemma 4 26B A4B also gives you the larger context window at 256K, compared with 200K for MiniMax M2.7.
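To put the pricing gap in concrete terms, here is a minimal sketch that plugs the per-token rates quoted above into a monthly bill. The token volumes are illustrative assumptions, not BenchLM data:

```python
# Rough monthly cost comparison at the per-1M-token rates quoted above.
# The 500M/100M token volumes below are made-up illustration numbers.
PRICING = {  # USD per 1M tokens: (input, output)
    "MiniMax M2.7": (0.30, 1.20),
    "Gemma 4 26B A4B": (0.00, 0.00),
}

def monthly_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost for a given monthly token volume."""
    in_rate, out_rate = PRICING[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example workload: 500M input / 100M output tokens per month.
for model in PRICING:
    print(f"{model}: ${monthly_cost(model, 500_000_000, 100_000_000):,.2f}/month")
# MiniMax M2.7: $270.00/month
# Gemma 4 26B A4B: $0.00/month
```

Swap in your own volumes; with Gemma listed at $0.00 on both sides its line always prints zero, which is exactly why a cost multiple is undefined here.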
BenchLM keeps the benchmark table and the operator tradeoffs on the same page so a better score does not hide a materially slower, pricier, or smaller-context model.
Runtime metrics show N/A when BenchLM does not have a sourced snapshot for that exact model. The scoring rules and freshness policy are documented on the methodology page.
| Benchmark | Gemma 4 26B A4B | MiniMax M2.7 |
|---|---|---|
| Agentic | | |
| Terminal-Bench 2.0 | — | 57% |
| Tau2-Airline | — | 80.0% |
| Tau2-Telecom | — | 84.8% |
| PinchBench | — | 89.8% |
| BFCL v4 | — | 70.6% |
| Toolathlon | — | 46.3% |
| MLE-Bench Lite | — | 66.6% |
| MM-ClawBench | — | 62.7% |
| Claw-Eval | — | 51.9% |
| Coding (Gemma 4 26B A4B wins) | | |
| LiveCodeBench | 77.1% | — |
| SWE-bench Verified* | — | 75.4% |
| SWE-bench Pro | — | 56.2% |
| SWE Multilingual | — | 76.5% |
| Multi-SWE Bench | — | 52.7% |
| VIBE-Pro | — | 55.6% |
| NL2Repo | — | 39.8% |
| Multimodal & Grounded | | |
| MMMU-Pro | 73.8% | — |
| GDPval-AA | — | 1495 |
| Reasoning | | |
| BBH | 64.8% | — |
| MRCRv2 | 44.1% | — |
| Knowledge | | |
| GPQA | 82.3% | — |
| MMLU-Pro | 82.6% | — |
| HLE | 17.2% | — |
| HLE w/o tools | 8.7% | — |
| GPQA-D | — | 86.2% |
| MMLU-Pro (Arcee) | — | 80.8% |
| Instruction Following | | |
| IFBench | — | 75.7% |
| Multilingual | | |
| Coming soon | — | — |
| Mathematics | | |
| AIME25 (Arcee) | — | 80.0% |
MiniMax M2.7 is ahead overall, 66 to 64.
Gemma 4 26B A4B has the edge for coding in this comparison, averaging 77.1 versus 56.2 for MiniMax M2.7. Note, though, that the two models are scored on different coding benchmarks here (LiveCodeBench for Gemma, the SWE-bench family and related suites for MiniMax), so the gap is indicative rather than head-to-head, and the answer can still flip depending on your workload.
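If you want to see roughly how a category number like these comes together, the simplest reading is an unweighted mean over the available scores in the coding rows above. This is an illustrative calculation, not BenchLM's scoring rule (their actual aggregation is documented on the methodology page), so it will not exactly reproduce the page's numbers:

```python
# Naive per-category mean over the coding rows in the table above.
# NOT BenchLM's scoring rule; weights/normalization may differ.
coding_scores = {
    "Gemma 4 26B A4B": [77.1],  # LiveCodeBench only
    "MiniMax M2.7": [75.4, 56.2, 76.5, 52.7, 55.6, 39.8],  # SWE-bench Verified,
    # SWE-bench Pro, SWE Multilingual, Multi-SWE Bench, VIBE-Pro, NL2Repo
}

for model, scores in coding_scores.items():
    mean = sum(scores) / len(scores)
    print(f"{model}: coding mean {mean:.1f} across {len(scores)} benchmark(s)")
# Gemma 4 26B A4B: coding mean 77.1 across 1 benchmark(s)
# MiniMax M2.7: coding mean 59.4 across 6 benchmark(s)
```

The naive mean for MiniMax lands near 59 rather than the 56.2 quoted above, which suggests BenchLM weights or filters benchmarks differently; the direction and rough size of the gap come out the same either way.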