Head-to-head comparison across 3 benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Claude Mythos Preview: 99 · Grok 4.20: 78
Verified leaderboard positions: Claude Mythos Preview #1 · Grok 4.20 unranked
Pick Claude Mythos Preview if you want the stronger benchmark profile. Grok 4.20 only becomes the better choice if you want the cheaper token bill or you need the larger 2M context window.
Agentic: +35.3 difference
Coding: +22.8 difference
Multimodal: +17.5 difference
                               Claude Mythos Preview   Grok 4.20
Price (input / output, per 1M) $25 / $125              $2 / $6
Throughput                     N/A                     233 t/s
Latency                        N/A                     10.33s
Context window                 1M                      2M
Claude Mythos Preview is clearly ahead on the provisional aggregate, 99 to 78. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.
Claude Mythos Preview's sharpest advantage is in agentic, where it averages 82.4 against 47.1. The single biggest benchmark swing on the page is Terminal-Bench 2.0, 82% to 47.1%.
Claude Mythos Preview is also the more expensive model on tokens at $25.00 input / $125.00 output per 1M tokens, versus $2.00 input / $6.00 output per 1M tokens for Grok 4.20. That is roughly 20.8x on output cost alone. Grok 4.20 gives you the larger context window at 2M, compared with 1M for Claude Mythos Preview.
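To make the pricing gap concrete, here is a minimal back-of-envelope sketch in Python. The per-token prices come from the table above; the monthly workload of 50M input and 10M output tokens is an invented assumption, chosen purely for illustration.

# Back-of-envelope cost comparison. Prices (USD per 1M tokens) come from
# the table above; the workload volumes are illustrative assumptions.
PRICES = {
    "Claude Mythos Preview": (25.00, 125.00),  # (input, output)
    "Grok 4.20": (2.00, 6.00),
}

def workload_cost(model: str, input_m: float, output_m: float) -> float:
    # Cost in USD for a workload measured in millions of tokens.
    price_in, price_out = PRICES[model]
    return input_m * price_in + output_m * price_out

# Hypothetical month: 50M input tokens, 10M output tokens.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 50, 10):,.2f}")
# -> Claude Mythos Preview: $2,500.00
# -> Grok 4.20: $160.00

# The output-price ratio quoted above: 125 / 6 ≈ 20.8x.
print(f"output ratio: {125.00 / 6.00:.1f}x")

At those assumed volumes the total bill differs by roughly 15.6x rather than 20.8x, because the cheaper input side dilutes the output-price gap.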
Claude Mythos Preview has the edge for coding in this comparison, averaging 83.8 versus 61; within the category, SWE-bench Pro opens the widest gap.
The agentic picture is similar but starker: 82.4 versus 47.1 on average, with Terminal-Bench 2.0 driving most of the separation.
On multimodal and grounded tasks, Claude Mythos Preview averages 92.7 versus 75.2, and CharXiv is where the largest single gap appears.
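For readers who want to trace the header numbers, this short Python sketch recomputes the category gaps from the per-model averages quoted in the three paragraphs above; it adds nothing beyond the arithmetic.

# Recompute the "+X.X difference" figures shown at the top of the page
# from the category averages quoted in the prose.
averages = {
    # category: (Claude Mythos Preview, Grok 4.20)
    "Agentic":    (82.4, 47.1),
    "Coding":     (83.8, 61.0),
    "Multimodal": (92.7, 75.2),
}

for category, (claude, grok) in averages.items():
    print(f"{category}: +{claude - grok:.1f} difference")
# -> Agentic: +35.3 difference
# -> Coding: +22.8 difference
# -> Multimodal: +17.5 difference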