Top AI models with dedicated reasoning capabilities, ranked by benchmark performance.
Unless noted otherwise, the rankings on this page use BenchLM's provisional leaderboard rather than the stricter, sourced-only verified leaderboard.
Bottom line: Reasoning models (chain-of-thought) dominate the top of the leaderboard. Claude Mythos Preview leads, but reasoning models cost more and run slower. Choose reasoning when accuracy matters more than speed.
According to BenchLM.ai, Claude Mythos Preview leads this ranking with a score of 99, followed by GPT-5.4 (93) and GPT-5.4 Pro (92). There is meaningful separation between the top models, suggesting genuine performance differences.
The best open-weight option is GLM-5.1 (ranked #6 with a score of 84). While proprietary models lead, open-weight options are within striking distance for teams willing to trade a few points of performance for full model control.
This ranking is based on provisional overall weighted scores from BenchLM.ai's scoring formula. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
Claude Mythos Preview
Anthropic · 1M
Top reasoning model. Perfect agentic, coding, and multilingual scores.
GPT-5.4
OpenAI · 1.05M
GPT-5.4 Pro
OpenAI · 1.05M
Claude Mythos Preview leads all reasoning models with the highest overall score.
GPT-5.4 is the best OpenAI reasoning model and leads knowledge at 98.
GPT-5.4 Pro is the premium tier, with perfect multimodal and math scores.
Best reasoning model overall?
Claude Mythos Preview — highest score across all categories
Complex knowledge reasoning?
GPT-5.4 — best knowledge + reasoning combo
Open-weight reasoning?
GLM-5.1 — best open-weight reasoning model
Reasoning on a budget?
See the value rankings for cost-adjusted picks
Get notified when models move. One email a week with what changed and why.
Free. No spam. Unsubscribe anytime.
The top model is Claude Mythos Preview by Anthropic with a provisional score of 99.
The best open-weight model is GLM-5.1 at position #6.
43 models are included in this ranking.
Reasoning models use chain-of-thought to improve accuracy on complex tasks. They are ranked by the same overall BenchLM score. Reasoning models typically outperform standard models by 10-20 points on math and logic.
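The idea behind chain-of-thought can be sketched in a few lines. This is an illustrative example only: the prompt wording and the `extract_answer` helper are assumptions, not any provider's official template.

```python
# Illustrative sketch: the same question asked directly versus with a
# chain-of-thought instruction. Only the prompt shape differs.
question = "If a train covers 180 km in 1.5 hours, what is its average speed?"

direct_prompt = f"Q: {question}\nA:"

cot_prompt = (
    f"Q: {question}\n"
    "Think through the problem step by step, then give the final answer "
    "on a line starting with 'Answer:'."
)

def extract_answer(completion: str) -> str:
    """Pull the final answer line out of a chain-of-thought completion."""
    for line in reversed(completion.splitlines()):
        if line.startswith("Answer:"):
            return line.removeprefix("Answer:").strip()
    return completion.strip()  # fall back to the raw completion text
```

The intermediate steps are what the benchmark gains come from; the `Answer:` convention is just one way to recover a gradable final answer from the longer output.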
Reasoning models are slower and more expensive per token due to longer output chains. The speed/cost trade-off is not reflected in benchmark scores. For latency-sensitive applications, compare with non-reasoning models.
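The cost side of that trade-off is simple arithmetic. The token counts and per-million-token price below are hypothetical placeholders, not BenchLM or vendor figures; only the shape of the calculation matters.

```python
def completion_cost(output_tokens: int, usd_per_million_tokens: float) -> float:
    """Cost in USD of one completion's billed output tokens."""
    return output_tokens * usd_per_million_tokens / 1_000_000

# Hypothetical numbers: a reasoning model emits a long chain of thought
# before its answer, so it bills many more output tokens per request.
standard_cost = completion_cost(300, 10.0)     # short direct answer
reasoning_cost = completion_cost(3_000, 10.0)  # answer plus reasoning chain

print(f"standard:  ${standard_cost:.4f}")
print(f"reasoning: ${reasoning_cost:.4f}  ({reasoning_cost / standard_cost:.0f}x)")
```

Even at the same per-token price, a 10x longer output chain means roughly 10x the cost and latency per request, which is exactly the overhead the benchmark scores do not capture.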