Head-to-head comparison across 4 benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Overall (provisional): Claude Mythos Preview 99 · Qwen3.6-35B-A3B 64
Verified leaderboard positions: Claude Mythos Preview #1 · Qwen3.6-35B-A3B #13
Pick Claude Mythos Preview if you want the stronger benchmark profile. Qwen3.6-35B-A3B only becomes the better choice if its workflow or ecosystem matters more than the raw scoreboard.
Category score differences (Claude Mythos Preview minus Qwen3.6-35B-A3B; recomputed in the sketch below):
Agentic: +30.9
Coding: +16.9
Knowledge: +14.4
Multimodal: +17.4
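Those four deltas fall directly out of the per-category averages quoted further down this page. Here is a minimal sketch of the arithmetic, assuming a plain dictionary of (Claude Mythos Preview, Qwen3.6-35B-A3B) averages transcribed from this comparison; the data layout and helper name are illustrative, not a BenchLM API:

```python
# Minimal sketch: recompute the category deltas from the per-category
# averages quoted on this page. The dict layout and function name are
# illustrative; BenchLM does not expose this as an API.

CATEGORY_AVERAGES = {
    # category     (Claude Mythos Preview, Qwen3.6-35B-A3B)
    "Agentic":    (82.4, 51.5),
    "Coding":     (83.8, 66.9),
    "Knowledge":  (74.9, 60.5),
    "Multimodal": (92.7, 75.3),
}

def category_deltas(averages: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Difference (first model minus second) for each benchmark category."""
    return {cat: round(a - b, 1) for cat, (a, b) in averages.items()}

if __name__ == "__main__":
    for cat, delta in category_deltas(CATEGORY_AVERAGES).items():
        print(f"{cat}: {delta:+.1f}")
    # -> Agentic: +30.9, Coding: +16.9, Knowledge: +14.4, Multimodal: +17.4
```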
Specs at a glance (Claude Mythos Preview · Qwen3.6-35B-A3B):
Pricing: $25 / $125 · N/A (cost math sketched below)
Context window: 1M · 262K
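If you want to turn the quoted pricing into a per-request estimate, here is a minimal sketch. A loud caveat: the page does not label the units on "$25 / $125"; the code assumes the common input/output-per-million-tokens convention, and the token counts in the example are hypothetical.

```python
# Minimal sketch: estimate a request's cost from the quoted pricing.
# ASSUMPTION: "$25 / $125" means input / output cost per 1M tokens;
# the page does not label the units, so treat this as illustrative.

INPUT_PER_M = 25.0    # USD per 1M input tokens (assumed)
OUTPUT_PER_M = 125.0  # USD per 1M output tokens (assumed)

def job_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request under the assumed per-token pricing."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 50K-token prompt with a 2K-token completion (hypothetical sizes).
print(f"${job_cost(50_000, 2_000):.2f}")  # $1.50
```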
Claude Mythos Preview is clearly ahead on the provisional aggregate, 99 to 64. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.
Claude Mythos Preview's sharpest advantage is in agentic, where it averages 82.4 against 51.5. The single biggest benchmark swing on the page is HLE, 64.7% to 21.4%.
Claude Mythos Preview gives you the larger context window at 1M, compared with 262K for Qwen3.6-35B-A3B.
Breaking that 99-to-64 provisional gap down by category, Claude Mythos Preview leads everywhere; the benchmark named on each line is the one with the largest per-benchmark delta (recomputed in the sketch after the list).
Knowledge: 74.9 versus 60.5; HLE creates the most daylight.
Coding: 83.8 versus 66.9; SWE-bench Pro is the widest gap.
Agentic: 82.4 versus 51.5; Terminal-Bench 2.0 separates them most.
Multimodal and grounded: 92.7 versus 75.3; MMMU-Pro shows the largest spread.
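"Widest gap" here just means the benchmark with the largest absolute score difference between the two models. A minimal sketch, seeded only with the HLE pair quoted on this page; any further entries you add would be placeholders, not BenchLM data:

```python
# Minimal sketch: find the benchmark with the largest score gap.
# Only the HLE pair is quoted on this page; other entries added to
# BENCHMARK_SCORES would be placeholders, not BenchLM data.

BENCHMARK_SCORES = {
    # benchmark  (Claude Mythos Preview %, Qwen3.6-35B-A3B %)
    "HLE": (64.7, 21.4),
}

def biggest_separator(scores: dict[str, tuple[float, float]]) -> tuple[str, float]:
    """Return (benchmark, absolute gap) for the widest score difference."""
    name = max(scores, key=lambda b: abs(scores[b][0] - scores[b][1]))
    a, b = scores[name]
    return name, round(abs(a - b), 1)

print(biggest_separator(BENCHMARK_SCORES))  # ('HLE', 43.3)
```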