Reasoning Benchmarks
Logical reasoning and problem solving: compare AI models across two logical reasoning benchmarks, SimpleQA and MuSR.
Reasoning Benchmark Results
Showing 25 of 52 models
| # | Model | Creator | License | Type | Context | Score | SimpleQA | MuSR |
|---|-------|---------|---------|------|---------|-------|----------|------|
| 1 | GPT-5 (high) | OpenAI | Proprietary | Reasoning | 128K | 72 | 89% | 87% |
| 2 | o1-preview | OpenAI | Proprietary | Reasoning | 200K | 71 | 88% | 86% |
| 3 | GPT-5 (medium) | OpenAI | Proprietary | Reasoning | 128K | 70 | 87% | 85% |
| 4 | Grok 4 | xAI | Proprietary | Non-Reasoning | 128K | 69 | 83% | 81% |
| 5 | GPT-5 mini | OpenAI | Proprietary | Reasoning | 128K | 68 | 84% | 82% |
| 6 | o3-pro | OpenAI | Proprietary | Reasoning | 200K | 68 | 86% | 84% |
| 7 | o3 | OpenAI | Proprietary | Reasoning | 200K | 67 | 84% | 82% |
| 8 | Qwen2.5-1M | Alibaba | Open Weight | Non-Reasoning | 1M | 66 | 81% | 79% |
| 9 | Qwen2.5-72B | Alibaba | Open Weight | Non-Reasoning | 128K | 65 | 80% | 78% |
| 10 | o4-mini (high) | OpenAI | Proprietary | Non-Reasoning | 200K | 65 | 80% | 78% |
| 11 | Gemini 2.5 Pro | Google | Proprietary | Non-Reasoning | 2M | 65 | 81% | 79% |
| 12 | DeepSeek Coder 2.0 | DeepSeek | Open Weight | Non-Reasoning | 128K | 64 | 78% | 76% |
| 13 | DeepSeek LLM 2.0 | DeepSeek | Open Weight | Non-Reasoning | 128K | 63 | 77% | 75% |
| 14 | Claude 4.1 Opus | Anthropic | Proprietary | Non-Reasoning | 200K | 61 | 74% | 72% |
| 15 | Claude 4 Sonnet | Anthropic | Proprietary | Non-Reasoning | 200K | 59 | 71% | 69% |
| 16 | Llama 3.1 405B | Meta | Open Weight | Non-Reasoning | 128K | 58 | 68% | 66% |
| 17 | Mistral Large 2 | Mistral | Proprietary | Non-Reasoning | 128K | 57 | 66% | 64% |
| 18 | GPT-4o | OpenAI | Proprietary | Non-Reasoning | 128K | 56 | 64% | 62% |
| 19 | Claude 3.5 Sonnet | Anthropic | Proprietary | Non-Reasoning | 200K | 55 | 63% | 61% |
| 20 | Gemini 1.5 Pro | Google | Proprietary | Non-Reasoning | 2M | 54 | 62% | 60% |
| 21 | Mistral 8x7B | Mistral | Open Weight | Non-Reasoning | 32K | 52 | 63% | 61% |
| 22 | Gemini 1.0 Pro | Google | Proprietary | Non-Reasoning | 32K | 52 | 60% | 58% |
| 23 | Claude 3 Opus | Anthropic | Proprietary | Non-Reasoning | 200K | 51 | 59% | 57% |
| 24 | GPT-4 Turbo | OpenAI | Proprietary | Non-Reasoning | 128K | 50 | 58% | 56% |
| 25 | Llama 3 70B | Meta | Open Weight | Non-Reasoning | 128K | 48 | 56% | 54% |
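The original page supports sorting and filtering interactively; with the table in static form, a minimal sketch of doing the same programmatically (a few rows copied from the table above; the tuple layout itself is an assumption, not part of the page):

```python
# Each row: (model, creator, license, context, score, simpleqa_pct, musr_pct).
# Values are taken from a subset of the leaderboard table above.
rows = [
    ("GPT-5 (high)", "OpenAI", "Proprietary", "128K", 72, 89, 87),
    ("Qwen2.5-1M", "Alibaba", "Open Weight", "1M", 66, 81, 79),
    ("Llama 3.1 405B", "Meta", "Open Weight", "128K", 58, 68, 66),
    ("GPT-4o", "OpenAI", "Proprietary", "128K", 56, 64, 62),
]

# Filter to open-weight models, then sort by overall score (descending),
# mirroring the page's filter-and-sort behavior.
open_weight = sorted(
    (r for r in rows if r[2] == "Open Weight"),
    key=lambda r: r[4],
    reverse=True,
)
best = open_weight[0]
print(best[0])  # Qwen2.5-1M
```

Note that the overall score is not simply the mean of the two benchmark percentages (e.g. GPT-5 (high) averages 88% but scores 72), so the score column here is treated as an opaque ranking value.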
About Reasoning Benchmarks
SimpleQA
Factual question answering benchmark
MuSR
Complex multi-step reasoning problems