Head-to-head comparison across 3 benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
| Model | Provisional score | Verified leaderboard position |
|---|---|---|
| GPT-5.4 mini | 73 | unranked |
| GPT-5.5 | 89 | #2 |
Pick GPT-5.5 if you want the stronger benchmark profile. GPT-5.4 mini is only the better choice if multimodal and grounded work is the priority, or if you want the cheaper token bill.
| Category | GPT-5.4 mini | GPT-5.5 | Difference |
|---|---|---|---|
| Agentic | 65.6 | 81.8 | +16.2 (GPT-5.5) |
| Knowledge | 57.4 | 66.4 | +9.0 (GPT-5.5) |
| Multimodal | 76.6 | 69.0 | +7.6 (GPT-5.4 mini) |
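If you want to reproduce the difference column, here is a minimal Python sketch; the dictionary simply restates the per-category averages quoted further down this page, and the helper itself is hypothetical rather than BenchLM code.

```python
# Hypothetical helper, not BenchLM code: derives the difference column
# above from the per-category averages quoted later on this page.
averages = {
    "Agentic":    {"GPT-5.4 mini": 65.6, "GPT-5.5": 81.8},
    "Knowledge":  {"GPT-5.4 mini": 57.4, "GPT-5.5": 66.4},
    "Multimodal": {"GPT-5.4 mini": 76.6, "GPT-5.5": 69.0},
}

for category, scores in averages.items():
    leader = max(scores, key=scores.get)       # model with the higher average
    gap = abs(scores["GPT-5.5"] - scores["GPT-5.4 mini"])
    print(f"{category}: +{gap:.1f} in favor of {leader}")
```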
| | GPT-5.4 mini | GPT-5.5 |
|---|---|---|
| Price (input / output, per 1M tokens) | $0.75 / $4.50 | $5.00 / $30.00 |
| Throughput | 201 t/s | N/A |
| Latency | 3.85 s | N/A |
| Context window | 400K | 1M |
GPT-5.5 is clearly ahead on the provisional aggregate, 89 to 73. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.
GPT-5.5's sharpest advantage is in agentic, where it averages 81.8 against 65.6. The single biggest benchmark swing on the page is Terminal-Bench 2.0, where GPT-5.4 mini scores 60% to GPT-5.5's 82.7%. GPT-5.4 mini does hit back in multimodal and grounded, so the answer changes if that is the part of the workload you care about most.
GPT-5.5 is also the more expensive model on tokens at $5.00 input / $30.00 output per 1M tokens, versus $0.75 input / $4.50 output per 1M tokens for GPT-5.4 mini. That is roughly 6.7x on output cost alone. GPT-5.5 gives you the larger context window at 1M, compared with 400K for GPT-5.4 mini.
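To translate those rates into an actual bill, here is a minimal sketch; the 50M-input / 10M-output monthly workload is an assumption for illustration, and only the per-1M-token rates come from the comparison above.

```python
# Per-1M-token rates from the comparison above: (input $, output $).
RATES = {
    "GPT-5.4 mini": (0.75, 4.50),
    "GPT-5.5":      (5.00, 30.00),
}

def monthly_cost(model: str, input_tokens: float, output_tokens: float) -> float:
    """Token bill in dollars for one month of traffic."""
    rate_in, rate_out = RATES[model]
    return input_tokens / 1e6 * rate_in + output_tokens / 1e6 * rate_out

# Assumed workload: 50M input + 10M output tokens per month.
for model in RATES:
    print(f"{model}: ${monthly_cost(model, 50e6, 10e6):,.2f}/month")

# The output-cost ratio behind the ~6.7x figure above:
print(f"Output cost ratio: {30.00 / 4.50:.2f}x")  # 6.67x
```

At that assumed volume the bill is $82.50 against $550.00 a month, and the "cheaper token bill" argument scales linearly with traffic.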
GPT-5.5 is ahead on BenchLM's provisional leaderboard, 89 to 73. The biggest single separator in this matchup is Terminal-Bench 2.0, where GPT-5.4 mini scores 60% to GPT-5.5's 82.7%.
GPT-5.5 has the edge for knowledge tasks in this comparison, averaging 66.4 versus 57.4. Inside this category, HLE w/o tools is the benchmark that creates the most daylight between them.
GPT-5.5 has the edge for agentic tasks in this comparison, averaging 81.8 versus 65.6. Inside this category, Terminal-Bench 2.0 is the benchmark that creates the most daylight between them.
GPT-5.4 mini has the edge for multimodal and grounded tasks in this comparison, averaging 76.6 versus 69.0. Inside this category, MMMU-Pro w/ Python is the benchmark that creates the most daylight between them.