Best Non-Reasoning LLMs in 2026

Top standard AI models (no chain-of-thought reasoning) ranked by benchmark performance. Faster and cheaper than reasoning models.

Unless noted otherwise, the rankings on this page use BenchLM's provisional leaderboard rather than the stricter, sourced-only verified leaderboard.

Bottom line: Non-reasoning models are faster and cheaper than chain-of-thought alternatives. Claude Opus 4.7 leads this tier, showing that strong benchmark scores are possible without dedicated thinking tokens.

According to BenchLM.ai, Claude Opus 4.7 leads this ranking with a score of 97, followed by Gemini 3.1 Pro (93) and Claude Opus 4.6 (91). The multi-point gaps between the top models suggest genuine performance differences rather than benchmark noise.

The best open-weight option is GLM-5 (ranked #8 with a score of 77). While proprietary models lead, open-weight options are within striking distance for teams willing to trade a few points of performance for full model control.

This ranking is based on provisional overall weighted scores computed with BenchLM.ai's scoring formula. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
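To make the "overall weighted score" idea concrete, here is a minimal sketch of how such a score could be computed. The category names and weights below are illustrative assumptions, not BenchLM's actual formula, which is not reproduced on this page.

```python
# Hypothetical category weights summing to 1.0 -- an assumption for
# illustration, not BenchLM's published formula.
WEIGHTS = {
    "reasoning": 0.20,
    "knowledge": 0.15,
    "coding": 0.20,
    "math": 0.15,
    "multilingual": 0.10,
    "multimodal": 0.10,
    "instruction_following": 0.05,
    "long_context": 0.05,
}

def overall_score(category_scores: dict) -> int:
    """Weighted average of per-category scores, rounded to an integer."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return round(sum(WEIGHTS[c] * category_scores[c] for c in WEIGHTS))

# Example with made-up category scores: 90 everywhere except reasoning.
scores = {c: 90.0 for c in WEIGHTS}
scores["reasoning"] = 100.0
print(overall_score(scores))  # 90 + 0.20 * 10 = 92
```

A formula of this shape explains why a model can top an individual category yet not lead the overall ranking: the overall number blends all categories by weight.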

What changed

Gemini 3.1 Pro posts this tier's best reasoning (97), knowledge (96), and multilingual (100) category scores.

Claude Opus 4.6 is the most consistent non-reasoning model across all 8 categories.

Claude Sonnet 4.6 is a strong mid-tier pick, with the best multimodal score (95) in this tier.

Full Rankings (67 models)

1. Claude Opus 4.7 · Anthropic · Proprietary · 1M · 97 (prov. overall)
2. Gemini 3.1 Pro · Google · Proprietary · 1M · 93 (prov. overall)
3. Claude Opus 4.6 · Anthropic · Proprietary · 1M · 91 (prov. overall)
4. Claude Sonnet 4.6 · Anthropic · Proprietary · 200K · 85 (prov. overall)
5. Gemini 3 Pro · Google · Proprietary · 2M · 83 (prov. overall)
6. Claude Opus 4.5 · Anthropic · Proprietary · 200K · 80 (prov. overall)
7. Grok 4.1 · xAI · Proprietary · 1M · 80 (prov. overall)
8. GLM-5 · Z.AI · Open Weight · 200K · 77 (prov. overall)
9. Grok 4.1 Fast · xAI · Proprietary · 1M · 72 (prov. overall)
10. Kimi K2.5 · Moonshot AI · Open Weight · 256K · 68 (prov. overall)
11. Claude Sonnet 4.5 · Anthropic · Proprietary · 200K · 68 (prov. overall)
12. Gemini 2.5 Pro · Google · Proprietary · 1M · 67 (prov. overall)
13. Gemini 3 Flash · Google · Proprietary · 1M · 67 (prov. overall)
14. Grok 4 · xAI · Proprietary · 128K · 67 (prov. overall)
15. Qwen3.5 397B · Alibaba · Open Weight · 128K · 66 (prov. overall)
16. MiniMax M2.7 · MiniMax · Open Weight · 200K · 64 (prov. overall)
17. DeepSeek V3.2 · DeepSeek · Open Weight · 128K · 60 (prov. overall)
18. GPT-4.1 · OpenAI · Proprietary · 1M · 60 (prov. overall)
19. Claude Haiku 4.5 · Anthropic · Proprietary · 200K · 59 (prov. overall)
20. Claude 4.1 Opus · Anthropic · Proprietary · 200K · 53 (prov. overall)
21. DeepSeek Coder 2.0 · DeepSeek · Open Weight · 128K · 53 (prov. overall)
22. DeepSeek LLM 2.0 · DeepSeek · Open Weight · 128K · 53 (prov. overall)
23. Qwen2.5-1M · Alibaba · Open Weight · 1M · 53 (prov. overall)
24. Claude 4 Sonnet · Anthropic · Proprietary · 200K · 52 (prov. overall)
25. Mistral Large 3 · Mistral · Proprietary · 128K · 52 (prov. overall)
26. Qwen2.5-72B · Alibaba · Open Weight · 128K · 52 (prov. overall)
27. Gemini 3.1 Flash-Lite · Google · Proprietary · 1M · 51 (prov. overall)
28. GPT-4.1 mini · OpenAI · Proprietary · 1M · 47 (prov. overall)
29. Nemotron 3 Super 100B · NVIDIA · Open Weight · 1M · 46 (prov. overall)
30. GPT-4o mini · OpenAI · Proprietary · 128K · 45 (prov. overall)
31. Kimi K2 · Moonshot AI · Proprietary · 128K · 43 (prov. overall)
32. Llama 3.1 405B · Meta · Open Weight · 128K · 43 (prov. overall)
33. Claude 3.5 Sonnet · Anthropic · Proprietary · 200K · 42 (prov. overall)
34. Grok Code Fast 1 · xAI · Proprietary · 256K · 42 (prov. overall)
35. GPT-4o · OpenAI · Proprietary · 128K · 41 (prov. overall)
36. Gemini 2.5 Flash · Google · Proprietary · 1M · 40 (prov. overall)
37. Mistral Large 2 · Mistral · Proprietary · 128K · 40 (prov. overall)
38. GPT-OSS 120B · OpenAI · Open Weight · 128K · 38 (prov. overall)
39. DeepSeek V3 · DeepSeek · Open Weight · 128K · 37 (prov. overall)
40. Gemini 1.5 Pro · Google · Proprietary · 2M · 37 (prov. overall)
41. Claude 3 Opus · Anthropic · Proprietary · 200K · 36 (prov. overall)
42. Qwen3 235B 2507 · Alibaba · Open Weight · 128K · 35 (prov. overall)
43. Grok 3 [Beta] · xAI · Proprietary · 128K · 34 (prov. overall)
44. DBRX Instruct · Databricks · Open Weight · 32K · 33 (prov. overall)
45. GLM-4.5 · Z.AI · Proprietary · 128K · 29 (prov. overall)
46. Phi-4 · Microsoft · Open Weight · 16K · 29 (prov. overall)
47. DeepSeek V3.1 · DeepSeek · Open Weight · 128K · 28 (prov. overall)
48. GPT-4.1 nano · OpenAI · Proprietary · 1M · 28 (prov. overall)
49. Llama 3 70B · Meta · Open Weight · 128K · 28 (prov. overall)
50. GPT-4 Turbo · OpenAI · Proprietary · 128K · 27 (prov. overall)
51. Nemotron 3 Nano 30B · NVIDIA · Open Weight · 32K · 27 (prov. overall)
52. Gemini 1.0 Pro · Google · Proprietary · 32K · 25 (prov. overall)
53. Mistral 8x7B · Mistral · Open Weight · 32K · 25 (prov. overall)
54. Z-1 · Z · Proprietary · 128K · 25 (prov. overall)
55. Claude 3 Haiku · Anthropic · Proprietary · 200K · 24 (prov. overall)
56. Llama 4 Scout · Meta · Open Weight · 10M · 24 (prov. overall)
57. Mixtral 8x22B Instruct v0.1 · Mistral · Open Weight · 64K · 24 (prov. overall)
58. Moonshot v1 · Moonshot AI · Proprietary · 128K · 24 (prov. overall)
59. Nemotron-4 15B · NVIDIA · Open Weight · 32K · 24 (prov. overall)
60. GLM-4.5-Air · Z.AI · Proprietary · 128K · 21 (prov. overall)
61. GPT-OSS 20B · OpenAI · Open Weight · 128K · 19 (prov. overall)
62. Gemma 3 27B · Google · Open Weight · 32K · 18 (prov. overall)
63. Llama 4 Maverick · Meta · Open Weight · 1M · 18 (prov. overall)
64. Llama 4 Behemoth · Meta · Open Weight · 32K · 12 (prov. overall)
65. Nova Pro · Amazon · Proprietary · 128K · 11 (prov. overall)
66. Mistral 7B v0.3 · Mistral · Open Weight · 32K · 5 (prov. overall)
67. Mistral 8x7B v0.2 · Mistral · Open Weight · 32K · 2 (prov. overall)

These rankings update weekly


Key Takeaways

The top model is Claude Opus 4.7 by Anthropic with a provisional score of 97.

The best open-weight model is GLM-5 at position #8.

67 models are included in this ranking.

Score in Context

What these scores mean

Non-reasoning models are standard completion/chat models without dedicated chain-of-thought. They are ranked by the same overall BenchLM score and are typically faster and cheaper per token.
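The "cheaper per token" point follows from how reasoning models are typically billed: hidden chain-of-thought tokens count as output, so the same visible answer costs more. A minimal sketch, using hypothetical per-million-token prices rather than any provider's real rates:

```python
# Illustrative cost comparison. Reasoning models bill hidden "thinking"
# tokens as output tokens; prices here are made-up placeholders.

def request_cost(input_tokens: int, output_tokens: int,
                 thinking_tokens: int = 0,
                 price_in: float = 2.0, price_out: float = 8.0) -> float:
    """Cost in dollars, with prices given per million tokens."""
    billed_output = output_tokens + thinking_tokens
    return (input_tokens * price_in + billed_output * price_out) / 1_000_000

plain = request_cost(1_000, 500)                             # non-reasoning
with_cot = request_cost(1_000, 500, thinking_tokens=4_000)   # reasoning
print(f"${plain:.4f} vs ${with_cot:.4f}")  # $0.0060 vs $0.0380
```

Under these assumed numbers, a few thousand thinking tokens multiply the cost of an otherwise identical request several times over, which is the trade-off this tier avoids.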

Known limitations

The "non-reasoning" label excludes models with explicit chain-of-thought (like o3, DeepSeek R1). Some non-reasoning models still reason internally — the distinction is about architecture and pricing, not capability.

Last updated: April 21, 2026
