Head-to-head comparison across 2 benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Gemma 4 31B: 74
GPT-5.4 mini: 73
Pick Gemma 4 31B if you want the stronger benchmark profile. GPT-5.4 mini only becomes the better choice if you need the larger 400K context window.
Knowledge: +3.9 difference
Multimodal: +0.3 difference
                               Gemma 4 31B      GPT-5.4 mini
Price (input / output, per 1M) $0.00 / $0.00    $0.75 / $4.50
Output speed                   N/A              201 t/s
Latency                        N/A              3.85s
Context window                 256K             400K
Gemma 4 31B finishes one point ahead on BenchLM's provisional leaderboard, 74 to 73. That is enough to call a winner, but not enough to treat as a blowout. This matchup comes down to a few meaningful edges rather than one model dominating the board.
Gemma 4 31B's sharpest advantage is in knowledge, where it averages 61.3 against 57.4. The single biggest benchmark swing on the page is HLE, at 41.5% to 26.5%.
GPT-5.4 mini is also the more expensive model on tokens at $0.75 input / $4.50 output per 1M tokens, versus $0.00 input / $0.00 output per 1M tokens for Gemma 4 31B. With a free baseline there is no meaningful cost multiple to quote; any paid rate is strictly more expensive. GPT-5.4 mini gives you the larger context window at 400K, compared with 256K for Gemma 4 31B.
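To make the pricing gap concrete, here is a minimal sketch of what GPT-5.4 mini's rates imply for a workload. The per-1M-token rates come from the table above; the workload sizes are hypothetical.

```python
# Per-1M-token rates for GPT-5.4 mini, from the comparison above.
INPUT_RATE = 0.75   # dollars per 1M input tokens
OUTPUT_RATE = 4.50  # dollars per 1M output tokens

def token_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate total token cost in dollars for a given workload."""
    return (input_tokens / 1_000_000) * INPUT_RATE + \
           (output_tokens / 1_000_000) * OUTPUT_RATE

# Hypothetical monthly workload: 50M input tokens, 10M output tokens.
print(f"${token_cost(50_000_000, 10_000_000):.2f}")  # $82.50
```

The same workload on Gemma 4 31B's listed $0.00 / $0.00 rates comes out to $0, which is why no cost multiple between the two is quotable.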
Gemma 4 31B is ahead on BenchLM's provisional leaderboard, 74 to 73. The biggest single separator in this matchup is HLE, where the scores are 41.5% and 26.5%.
Gemma 4 31B has the edge for knowledge tasks in this comparison, averaging 61.3 versus 57.4. Inside this category, HLE is the benchmark that creates the most daylight between them.
Gemma 4 31B has the edge for multimodal and grounded tasks in this comparison, averaging 76.9 versus 76.6. Inside this category, MMMU-Pro is the benchmark that creates the most daylight between them.
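The category edges quoted earlier (+3.9 knowledge, +0.3 multimodal) appear to be simple differences of the category averages. A quick check, assuming that convention:

```python
# Category averages quoted in the comparison (Gemma 4 31B, GPT-5.4 mini).
knowledge = (61.3, 57.4)
multimodal = (76.9, 76.6)

def edge(pair: tuple) -> float:
    """Category edge; positive means Gemma 4 31B leads."""
    gemma, gpt = pair
    return round(gemma - gpt, 1)

print(edge(knowledge))   # 3.9
print(edge(multimodal))  # 0.3
```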