Head-to-head comparison across benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Composer 2: 73
GPT-5.4 mini: 71
Pick Composer 2 if you want the stronger benchmark profile. GPT-5.4 mini only becomes the better choice if agentic work is the priority or you need the larger 400K context window.
Biggest category gap: Agentic, where GPT-5.4 mini leads by +3.9.
                               Composer 2       GPT-5.4 mini
Price (input / output per 1M)  $0.50 / $2.50    $0.75 / $4.50
Throughput                     N/A              201 t/s
Latency                        N/A              3.85 s
Context window                 200K             400K
Composer 2 has the cleaner provisional overall profile here, landing at 73 versus 71. It is a real lead, but still close enough that category-level strengths matter more than the headline number.
GPT-5.4 mini is the more expensive model on tokens, at $0.75 input / $4.50 output per 1M tokens versus $0.50 input / $2.50 output for Composer 2. In exchange, GPT-5.4 mini gives you the larger context window at 400K, compared with 200K for Composer 2.
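To see what the per-1M-token rates mean in practice, here is a minimal sketch that turns the listed prices into a per-request dollar cost. The workload sizes (8K prompt tokens, 1K completion tokens) are hypothetical examples, not figures from this comparison:

```python
# Listed rates from the comparison table, in $ per 1M tokens.
PRICES = {
    "Composer 2":   {"input": 0.50, "output": 2.50},
    "GPT-5.4 mini": {"input": 0.75, "output": 4.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request, given its token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical request: 8K prompt tokens in, 1K completion tokens out.
for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 8_000, 1_000):.4f}")
# Composer 2:   $0.0065
# GPT-5.4 mini: $0.0105
```

On this sample workload GPT-5.4 mini comes out roughly 60% more expensive per request, which is worth weighing against its larger context window.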
Composer 2 is ahead on BenchLM's provisional leaderboard, 73 to 71. The biggest single separator in this matchup is Terminal-Bench 2.0, where the scores are 61.7% and 60%.
GPT-5.4 mini has the edge for agentic tasks in this comparison, averaging 65.6 versus 61.7. Inside this category, Terminal-Bench 2.0 is the benchmark that creates the most daylight between them.