Head-to-head comparison across benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.

Overall score: GPT-4.1 58, Laguna M.1 46.
Pick GPT-4.1 if you want the stronger benchmark profile. Laguna M.1 only becomes the better choice if coding is the priority or you want the cheaper token bill.
Coding: +1.8 difference in Laguna M.1's favor (56.4 vs 54.6).

| | GPT-4.1 | Laguna M.1 |
| --- | --- | --- |
| Price per 1M tokens (input / output) | $2 / $8 | $0 / $0 |
| Throughput | 108 t/s | N/A |
| Latency | 1.02 s | N/A |
| Context window | 1M | 131K |
GPT-4.1 is clearly ahead on the provisional aggregate, 58 to 46. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.
GPT-4.1 is also the more expensive model on tokens at $2.00 input / $8.00 output per 1M tokens, versus $0.00 input / $0.00 output per 1M tokens for Laguna M.1. With Laguna M.1 listed at $0, a cost multiple is not meaningful; GPT-4.1 simply carries the entire token bill in this pairing. Laguna M.1 is the reasoning model of the pair, while GPT-4.1 is not. That usually helps on harder chain-of-thought-heavy tests, but it can also mean more latency and more token spend in real use. GPT-4.1 gives you the larger context window at 1M tokens, compared with 131K for Laguna M.1.
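To see how the listed prices translate into per-request spend, here is a minimal sketch. The prices come from the table above; the request size (8K input / 1K output tokens) and the `request_cost` helper are assumptions for illustration, not BenchLM figures.

```python
# Per-request cost sketch using the listed per-1M-token prices.
# The 8K-in / 1K-out request size below is an illustrative assumption.

PRICES = {  # USD per 1M tokens: (input, output)
    "GPT-4.1": (2.00, 8.00),
    "Laguna M.1": (0.00, 0.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed rates."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

for model in PRICES:
    cost = request_cost(model, input_tokens=8_000, output_tokens=1_000)
    print(f"{model}: ${cost:.4f} per request")

# A cost multiple (price_a / price_b) divides by zero when the cheaper
# model is listed at $0, which is why no output-cost multiple is quoted.
```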
The biggest single separator in this matchup is SWE-bench Verified, where GPT-4.1 scores 54.6% and Laguna M.1 scores 72.5%. That gap is what gives Laguna M.1 the edge for coding in this comparison, averaging 56.4 versus 54.6 across the category.
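If you want to sanity-check this kind of category rollup yourself, here is a minimal sketch assuming a simple unweighted mean per category (BenchLM's exact weighting is not stated here). Only the SWE-bench Verified row is from this comparison; the other two rows are hypothetical placeholders chosen so the averages come out at the quoted 56.4 and 54.6.

```python
# Category rollup sketch: unweighted mean per model, plus the benchmark
# with the largest absolute gap ("biggest separator") between the models.
# Only the SWE-bench Verified row comes from the comparison above; the
# other rows are made-up placeholders so the script runs end to end.

coding_scores = {
    "SWE-bench Verified": {"GPT-4.1": 54.6, "Laguna M.1": 72.5},
    "Placeholder bench A": {"GPT-4.1": 61.0, "Laguna M.1": 55.0},
    "Placeholder bench B": {"GPT-4.1": 48.2, "Laguna M.1": 41.7},
}

def category_average(scores: dict, model: str) -> float:
    return sum(row[model] for row in scores.values()) / len(scores)

def biggest_separator(scores: dict, a: str, b: str) -> str:
    return max(scores, key=lambda name: abs(scores[name][a] - scores[name][b]))

for model in ("GPT-4.1", "Laguna M.1"):
    print(f"{model} coding average: {category_average(coding_scores, model):.1f}")
print("Biggest separator:", biggest_separator(coding_scores, "GPT-4.1", "Laguna M.1"))
```

With these placeholder values the script prints averages of 54.6 and 56.4 and flags SWE-bench Verified as the widest gap, matching the numbers quoted above.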