Head-to-head comparison across shared benchmark categories; none are available yet for this pair. Overall scores shown here use BenchLM's provisional ranking lane.
Overall score (provisional): Claude 4.1 Opus 52 vs GPT-4.1 nano 27
Benchmark data for Claude 4.1 Opus and GPT-4.1 nano is coming soon on BenchLM.
Price (input / output per 1M tokens): Claude 4.1 Opus $15.00 / $75.00 vs GPT-4.1 nano $0.10 / $0.40
Output speed: Claude 4.1 Opus 29 t/s vs GPT-4.1 nano 181 t/s
Latency: Claude 4.1 Opus 1.66s vs GPT-4.1 nano 0.63s
Context window: Claude 4.1 Opus 200K vs GPT-4.1 nano 1M
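One rough way to read the speed figures together: total response time is roughly the latency before the first token plus output tokens divided by output speed. The sketch below assumes the latency values above are time to first token and that throughput stays constant over a hypothetical 500-token reply; both are simplifying assumptions, not BenchLM methodology.

```python
# Back-of-the-envelope response-time estimate from the speed figures above.
# Assumption: the listed latency is time to first token, and tokens/second
# stays constant for the whole reply. The 500-token reply is hypothetical.

SPEED = {
    # model: (time to first token in seconds, output tokens per second)
    "Claude 4.1 Opus": (1.66, 29),
    "GPT-4.1 nano": (0.63, 181),
}

def estimate_response_time(model: str, output_tokens: int) -> float:
    """Rough wall-clock seconds to stream a reply of output_tokens."""
    ttft, tokens_per_second = SPEED[model]
    return ttft + output_tokens / tokens_per_second

for model in SPEED:
    seconds = estimate_response_time(model, 500)
    print(f"{model}: ~{seconds:.1f}s for a 500-token reply")
```

On these numbers, a 500-token reply would take roughly 19s from Claude 4.1 Opus and about 3.4s from GPT-4.1 nano.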
BenchLM has partial data for these models, but not enough overlapping benchmark coverage to produce a fair score-level comparison yet.
Claude 4.1 Opus is priced at $15.00 input / $75.00 output per 1M tokens, versus $0.10 input / $0.40 output per 1M tokens for GPT-4.1 nano. GPT-4.1 nano has the larger context window at 1M, compared with 200K for Claude 4.1 Opus.
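To make the price gap concrete, the sketch below converts the listed per-1M-token prices into a per-request cost. The 10K-input / 2K-output workload and the estimate_cost helper are hypothetical; only the dollar figures come from the comparison above.

```python
# Per-request cost from the listed $/1M-token prices.
# The example workload (10K input tokens, 2K output tokens) is hypothetical.

PRICES_PER_1M = {
    # model: (input $ per 1M tokens, output $ per 1M tokens)
    "Claude 4.1 Opus": (15.00, 75.00),
    "GPT-4.1 nano": (0.10, 0.40),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    input_price, output_price = PRICES_PER_1M[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

for model in PRICES_PER_1M:
    cost = estimate_cost(model, 10_000, 2_000)
    print(f"{model}: ${cost:.4f} per request")
```

On that workload the gap is roughly 170x: about $0.30 per request for Claude 4.1 Opus versus about $0.002 for GPT-4.1 nano.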
Not fully yet. BenchLM is tracking both models, but the sourced benchmark breakdown for this comparison is still coming soon.
BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.
Claude 4.1 Opus: $15.00 input / $75.00 output per 1M tokens
GPT-4.1 nano: $0.10 input / $0.40 output per 1M tokens
Both model pages still include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.