Head-to-head comparison across two benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Overall: Claude 3.5 Sonnet 41, GPT-4.1 58.
Pick GPT-4.1 if you want the stronger benchmark profile. Claude 3.5 Sonnet only becomes the better choice if its workflow or ecosystem matters more than the raw scoreboard.
Category gaps (GPT-4.1 minus Claude 3.5 Sonnet; recomputed in the sketch below):
Coding: +5.6
Knowledge: +6.9
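Those two gap figures follow directly from the category averages quoted later on this page. A minimal Python sketch of the arithmetic, assuming BenchLM's category score is a plain equal-weight average of its benchmarks (the page does not say how it aggregates):

```python
# Category averages as shown on this page; the "+X.X" gap figures are
# simply GPT-4.1 minus Claude 3.5 Sonnet.
# Assumption: BenchLM averages benchmarks with equal weight -- the page
# does not confirm its aggregation method.
category_averages = {
    "Coding":    {"Claude 3.5 Sonnet": 49.0, "GPT-4.1": 54.6},
    "Knowledge": {"Claude 3.5 Sonnet": 59.4, "GPT-4.1": 66.3},
}

for category, scores in category_averages.items():
    gap = scores["GPT-4.1"] - scores["Claude 3.5 Sonnet"]
    print(f"{category}: {gap:+.1f}")
# Coding: +5.6
# Knowledge: +6.9
```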
                        Claude 3.5 Sonnet   GPT-4.1
Price (per 1M in/out)   $3 / $15            $2 / $8
Throughput              N/A                 108 t/s
Latency                 N/A                 1.02s
Context window          200K                1M
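To make the rate-card gap concrete, here is a small Python sketch pricing a hypothetical monthly workload. The token volumes (2M input, 500K output) are invented for illustration; only the per-1M-token rates come from the table above.

```python
# Per-1M-token rates from the comparison table above.
RATES = {
    "Claude 3.5 Sonnet": {"input": 3.00, "output": 15.00},
    "GPT-4.1": {"input": 2.00, "output": 8.00},
}

def monthly_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a given token volume under the model's rate card."""
    r = RATES[model]
    return (input_tokens / 1e6) * r["input"] + (output_tokens / 1e6) * r["output"]

# Hypothetical workload: 2M input tokens and 500K output tokens per month.
for model in RATES:
    print(f"{model}: ${monthly_cost(model, 2_000_000, 500_000):.2f}/month")
# Claude 3.5 Sonnet: $13.50/month
# GPT-4.1: $8.00/month
```

At this mix, GPT-4.1 comes out roughly 40% cheaper, and heavier output skews the gap further, since the output rates differ by almost 2x.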
GPT-4.1 is clearly ahead on the provisional aggregate, 58 to 41. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.
GPT-4.1's sharpest advantage is in knowledge, where it averages 66.3 against 59.4. The single biggest benchmark swing on the page is GPQA, where Claude 3.5 Sonnet's 59.4% trails GPT-4.1's 66.3%.
Claude 3.5 Sonnet is also the more expensive model on tokens, at $3.00 input / $15.00 output per 1M versus $2.00 / $8.00 for GPT-4.1. GPT-4.1 also gives you the larger context window: 1M tokens against 200K for Claude 3.5 Sonnet.
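If the context-window difference is the deciding factor, a back-of-the-envelope fit check is straightforward. The sketch below uses the rough ~4 characters per token heuristic rather than either provider's real tokenizer, so treat its output as an estimate:

```python
# Context windows from the comparison above (in tokens).
CONTEXT_WINDOWS = {
    "Claude 3.5 Sonnet": 200_000,  # 200K
    "GPT-4.1": 1_000_000,          # 1M
}

def fits(text: str, model: str, chars_per_token: float = 4.0) -> bool:
    """Estimate whether `text` fits in the model's context window.

    Uses a crude characters-per-token heuristic; a real check should
    tokenize with each provider's own tokenizer.
    """
    return len(text) / chars_per_token <= CONTEXT_WINDOWS[model]

# Example: a ~3MB text dump (~750K estimated tokens) overflows
# Claude 3.5 Sonnet's window but fits inside GPT-4.1's.
dump = "x" * 3_000_000
print({model: fits(dump, model) for model in CONTEXT_WINDOWS})
# {'Claude 3.5 Sonnet': False, 'GPT-4.1': True}
```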
Category by category, the pattern holds. GPT-4.1 has the edge on knowledge, averaging 66.3 to 59.4, with GPQA creating the most daylight between the two. It also leads on coding, averaging 54.6 to 49.0, where SWE-bench Verified is the benchmark that separates them most.