Head-to-head comparison across 2 benchmark categories: Claude Mythos Preview scores 84 overall, GPT-5.3 Codex scores 85.
Pick GPT-5.3 Codex if you want the stronger benchmark profile. Claude Mythos Preview only becomes the better choice if coding is the priority or you need the larger 1M context window.
Agentic: +9.5 difference (favoring Claude Mythos Preview)
Coding: +16.4 difference (favoring Claude Mythos Preview)
                                  Claude Mythos Preview   GPT-5.3 Codex
Price (input / output, per 1M)    $25 / $125              $2.50 / $10
Output speed                      N/A                     79 t/s
Latency                           N/A                     88.26s
Context window                    1M                      400K
GPT-5.3 Codex finishes one point ahead overall, 85 to 84. That gap is enough to call a winner, but not enough to treat as a blowout. This matchup comes down to a few meaningful edges rather than one model dominating the board.
Claude Mythos Preview is the more expensive model on tokens, at $25.00 input / $125.00 output per 1M tokens, versus $2.50 input / $10.00 output per 1M tokens for GPT-5.3 Codex. That is roughly 12.5x on output cost alone. In exchange, Claude Mythos Preview gives you the larger context window: 1M, compared with 400K for GPT-5.3 Codex.
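To see how the listed per-1M-token rates translate into per-request spend, here is a minimal sketch of the arithmetic. The prices come from the table above; the `request_cost` helper and the token counts in the usage line are illustrative assumptions, not part of either provider's API.

```python
# USD per 1M tokens, as listed in the comparison table above.
PRICES = {
    "Claude Mythos Preview": {"input": 25.00, "output": 125.00},
    "GPT-5.3 Codex": {"input": 2.50, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# The "roughly 12.5x" claim is just the ratio of output prices: 125 / 10.
output_ratio = (PRICES["Claude Mythos Preview"]["output"]
                / PRICES["GPT-5.3 Codex"]["output"])
```

For example, a hypothetical request with 100K input tokens and 10K output tokens costs `request_cost("GPT-5.3 Codex", 100_000, 10_000)` = $0.35, versus $3.75 for Claude Mythos Preview at the same volumes.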
GPT-5.3 Codex is ahead overall, 85 to 84. The biggest single separator in this matchup is SWE-bench Pro, where Claude Mythos Preview scores 77.8% against 56.8% for GPT-5.3 Codex.
Claude Mythos Preview has the edge for coding in this comparison, averaging 83.8 versus 67.4. Inside this category, SWE-bench Pro is the benchmark that creates the most daylight between them.
Claude Mythos Preview has the edge for agentic tasks in this comparison, averaging 80.9 versus 71.4. Inside this category, OSWorld-Verified is the benchmark that creates the most daylight between them.
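The category differences quoted earlier follow directly from these averages. A small sketch, using only the averages stated in the text (the per-benchmark scores behind them are not listed here, so they are taken as given):

```python
# Category averages as stated in the comparison prose.
averages = {
    "coding":  {"Claude Mythos Preview": 83.8, "GPT-5.3 Codex": 67.4},
    "agentic": {"Claude Mythos Preview": 80.9, "GPT-5.3 Codex": 71.4},
}

# Gap in Claude Mythos Preview's favor per category, rounded to one decimal
# to avoid floating-point noise in the subtraction.
gaps = {
    cat: round(s["Claude Mythos Preview"] - s["GPT-5.3 Codex"], 1)
    for cat, s in averages.items()
}
# coding: 16.4, agentic: 9.5 -- matching the "+16.4" and "+9.5" figures above.
```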