Head-to-head comparison across two benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Overall score (provisional lane): GPT-5.4 89 · Laguna M.1 46
Verified leaderboard positions: GPT-5.4 #12 · Laguna M.1 unranked
Pick GPT-5.4 if you want the stronger benchmark profile. Laguna M.1 only becomes the better choice if you want the cheaper token bill.
Category gaps (GPT-5.4 minus Laguna M.1): Agentic +36.3 · Coding +1.3
| Metric | GPT-5.4 | Laguna M.1 |
|---|---|---|
| Price (input / output, per 1M tokens) | $2.50 / $15.00 | $0.00 / $0.00 |
| Throughput | 74 t/s | N/A |
| Latency | 151.79 s | N/A |
| Context window | 1.05M | 131K |
GPT-5.4 is clearly ahead on the provisional aggregate, 89 to 46. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.
GPT-5.4's sharpest advantage is in agentic, where it averages 77 against 40.7. The single biggest benchmark swing on the page is Terminal-Bench 2.0, 75.1% to 40.7%.
GPT-5.4 is also the more expensive model on tokens at $2.50 input / $15.00 output per 1M tokens, versus $0.00 input / $0.00 output for Laguna M.1, which is listed as free. With a $0.00 denominator there is no finite cost multiple to quote: any paid usage of GPT-5.4 is more expensive by definition. GPT-5.4 gives you the larger context window at 1.05M tokens, compared with 131K for Laguna M.1.
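If you want to sanity-check the token bill yourself, the arithmetic is just price times volume. Here is a minimal Python sketch using the per-1M-token prices listed above; the 50M-input / 10M-output monthly workload is a hypothetical placeholder, not a figure from this page.

```python
# Rough cost sketch using the listed prices. The workload volumes below
# are hypothetical placeholders, not figures from the comparison page.

PRICES = {  # USD per 1M tokens: (input, output)
    "GPT-5.4": (2.50, 15.00),
    "Laguna M.1": (0.00, 0.00),
}

def monthly_cost(model: str, input_tokens: float, output_tokens: float) -> float:
    """Estimate a monthly token bill in USD for a given token volume."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

# Example: 50M input tokens and 10M output tokens per month.
for model in PRICES:
    print(f"{model}: ${monthly_cost(model, 50e6, 10e6):,.2f}/month")
# GPT-5.4: $275.00/month
# Laguna M.1: $0.00/month
```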
GPT-5.4 is ahead on BenchLM's provisional leaderboard, 89 to 46. The biggest single separator in this matchup is Terminal-Bench 2.0, where the scores are 75.1% and 40.7%.
GPT-5.4 has the edge for coding in this comparison, averaging 57.7 versus 56.4. Inside this category, SWE-bench Pro is the benchmark that creates the most daylight between them.
GPT-5.4 has the edge for agentic tasks in this comparison, averaging 77 versus 40.7. Inside this category, Terminal-Bench 2.0 is the benchmark that creates the most daylight between them.
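To reproduce the category-gap chips near the top of the page, subtract Laguna M.1's category average from GPT-5.4's. The sketch below hard-codes the two averages quoted above; it assumes nothing about which benchmarks make up each category.

```python
# How the category-gap chips are derived: GPT-5.4's category average
# minus Laguna M.1's. Averages are the ones quoted on this page.

CATEGORY_AVERAGES = {  # category: (GPT-5.4, Laguna M.1)
    "Agentic": (77.0, 40.7),
    "Coding": (57.7, 56.4),
}

for category, (a, b) in CATEGORY_AVERAGES.items():
    print(f"{category}: {a - b:+.1f} difference")
# Agentic: +36.3 difference
# Coding: +1.3 difference
```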