
Best Open Source LLMs in 2026

Open-weight models have closed much of the gap with proprietary ones. The best open models now score within 5-10 points of the top closed APIs on most benchmarks. DeepSeek, Meta Llama, Alibaba Qwen, Zhipu GLM, and Mistral all ship strong open options — some of them reasoning models that match proprietary performance on math and coding. The main trade-offs are context window size (most cap at 128K vs 1M+ for top proprietary models) and agentic performance, where proprietary models still hold a wider lead. Self-hosting also shifts infrastructure burden to you, so factor in serving costs.
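The serving-cost trade-off is easy to sanity-check with rough arithmetic. The sketch below compares API billing against renting a GPU; every number in it (API price, GPU hourly rate, sustained throughput) is an illustrative assumption, not BenchLM data, so substitute your own figures.

```python
# Rough break-even sketch: self-hosted serving vs. a proprietary API.
# All constants below are illustrative assumptions, not measured values.

API_COST_PER_M_TOKENS = 10.0           # assumed blended $/1M tokens for a top API
GPU_HOUR_COST = 2.5                    # assumed $/hour for a rented inference GPU
SELF_HOST_TOKENS_PER_HOUR = 1_500_000  # assumed sustained serving throughput

def monthly_cost_api(tokens_per_month: float) -> float:
    """API billing scales linearly with usage."""
    return tokens_per_month / 1_000_000 * API_COST_PER_M_TOKENS

def monthly_cost_self_host(tokens_per_month: float) -> float:
    """Self-hosting pays for GPU hours, saturated or not."""
    hours_needed = tokens_per_month / SELF_HOST_TOKENS_PER_HOUR
    return hours_needed * GPU_HOUR_COST

for tokens in (10e6, 100e6, 1e9):
    api = monthly_cost_api(tokens)
    selfh = monthly_cost_self_host(tokens)
    print(f"{tokens / 1e6:>6.0f}M tokens/mo: API ${api:,.0f} vs self-host ${selfh:,.0f}")
```

Under these assumptions self-hosting wins once the GPU stays reasonably busy; at low volume the API's pay-per-token pricing is cheaper, which is why the break-even point, not the benchmark score, often decides the question.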

Unless noted otherwise, ranking surfaces on this page use BenchLM's provisional leaderboard lane rather than the stricter sourced-only verified leaderboard.

Bottom line: Open-weight models are within 5-10 points of the best proprietary APIs. GLM-5.1 leads, narrowly ahead of GLM-5 (Reasoning) and Kimi 2.6.

According to BenchLM.ai, GLM-5.1 leads this ranking with a score of 84, followed by GLM-5 (Reasoning) (84) and Kimi 2.6 (83). The top three are separated by just a few points — any of them would perform well for this use case.

All models in this ranking are open-weight, meaning they can be self-hosted for maximum control and cost efficiency.

This ranking is based on provisional overall weighted scores computed with BenchLM.ai's scoring formula. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
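BenchLM's exact weighting is not published on this page, but a weighted overall score generally reduces to a normalized weighted average over benchmark categories. The sketch below shows that shape; the category names, per-category scores, and weights are hypothetical.

```python
# Illustrative shape of an overall weighted score. BenchLM's actual
# formula and weights are not given here; everything below is assumed.

def weighted_overall(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average over benchmark categories, normalized by total weight."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Hypothetical per-category scores for one model (0-100 scale).
scores = {"reasoning": 88, "coding": 82, "math": 86, "agentic": 70}
weights = {"reasoning": 0.3, "coding": 0.3, "math": 0.2, "agentic": 0.2}

print(round(weighted_overall(scores, weights), 1))
```

One practical consequence: two models with the same overall number can have very different category profiles, which is why the head-to-head comparisons matter more than the single score.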

What changed

GLM-5.1 leads all open-weight models, tied at 84 with GLM-5 (Reasoning).

DeepSeek V3.2 (Thinking) is DeepSeek's strongest entry, competitive on reasoning and math benchmarks.

Llama 3.1 405B remains Meta's strongest entry in this ranking; the newer Llama 4 models place lower.


Full Rankings (49 models)

1. GLM-5.1 · Z.AI · Open Weight · 203K · prov. overall: 84
2. GLM-5 (Reasoning) · Z.AI · Open Weight · 200K · prov. overall: 84
3. Kimi 2.6 · Moonshot AI · Open Weight · 256K · prov. overall: 83
4. Qwen3.5 397B (Reasoning) · Alibaba · Open Weight · 128K · prov. overall: 80
5. GLM-5 · Z.AI · Open Weight · 200K · prov. overall: 77
6. GLM-4.7 · Z.AI · Open Weight · 200K · prov. overall: 71
7. Qwen3.6-35B-A3B · Alibaba · Open Weight · 262K · prov. overall: 70
8. Kimi K2.5 · Moonshot AI · Open Weight · 256K · prov. overall: 68
9. Qwen3.5-122B-A10B · Alibaba · Open Weight · 262K · prov. overall: 68
10. Qwen3.5 397B · Alibaba · Open Weight · 128K · prov. overall: 66
11. Gemma 4 31B · Google · Open Weight · 256K · prov. overall: 66
12. Qwen3.5-27B · Alibaba · Open Weight · 262K · prov. overall: 65
13. DeepSeek V3.2 (Thinking) · DeepSeek · Open Weight · 128K · prov. overall: 65
14. MiniMax M2.7 · MiniMax · Open Weight · 200K · prov. overall: 64
15. MiMo-V2-Flash · Xiaomi · Open Weight · 256K · prov. overall: 62
16. DeepSeek V3.2 · DeepSeek · Open Weight · 128K · prov. overall: 60
17. Qwen3.5-35B-A3B · Alibaba · Open Weight · 262K · prov. overall: 59
18. Gemma 4 26B A4B · Google · Open Weight · 256K · prov. overall: 58
19. DeepSeek Coder 2.0 · DeepSeek · Open Weight · 128K · prov. overall: 53
20. DeepSeek LLM 2.0 · DeepSeek · Open Weight · 128K · prov. overall: 53
21. Qwen2.5-1M · Alibaba · Open Weight · 1M · prov. overall: 53
22. DeepSeekMath V2 · DeepSeek · Open Weight · 128K · prov. overall: 52
23. Qwen2.5-72B · Alibaba · Open Weight · 128K · prov. overall: 52
24. Nemotron 3 Ultra 500B · NVIDIA · Open Weight · 10M · prov. overall: 48
25. Qwen3 235B 2507 (Reasoning) · Alibaba · Open Weight · 128K · prov. overall: 48
26. Nemotron 3 Super 100B · NVIDIA · Open Weight · 1M · prov. overall: 46
27. Llama 3.1 405B · Meta · Open Weight · 128K · prov. overall: 43
28. Sarvam 105B · Sarvam · Open Weight · 128K · prov. overall: 41
29. GPT-OSS 120B · OpenAI · Open Weight · 128K · prov. overall: 38
30. DeepSeek V3 · DeepSeek · Open Weight · 128K · prov. overall: 37
31. DeepSeek-R1 · DeepSeek · Open Weight · 128K · prov. overall: 35
32. Qwen3 235B 2507 · Alibaba · Open Weight · 128K · prov. overall: 35
33. DBRX Instruct · Databricks · Open Weight · 32K · prov. overall: 33
34. DeepSeek V3.1 (Reasoning) · DeepSeek · Open Weight · 128K · prov. overall: 32
35. Phi-4 · Microsoft · Open Weight · 16K · prov. overall: 29
36. DeepSeek V3.1 · DeepSeek · Open Weight · 128K · prov. overall: 28
37. Llama 3 70B · Meta · Open Weight · 128K · prov. overall: 28
38. Nemotron 3 Nano 30B · NVIDIA · Open Weight · 32K · prov. overall: 27
39. Mistral 8x7B · Mistral · Open Weight · 32K · prov. overall: 25
40. Llama 4 Scout · Meta · Open Weight · 10M · prov. overall: 24
41. Mixtral 8x22B Instruct v0.1 · Mistral · Open Weight · 64K · prov. overall: 24
42. Nemotron-4 15B · NVIDIA · Open Weight · 32K · prov. overall: 24
43. Nemotron Ultra 253B · NVIDIA · Open Weight · 32K · prov. overall: 23
44. GPT-OSS 20B · OpenAI · Open Weight · 128K · prov. overall: 19
45. Gemma 3 27B · Google · Open Weight · 32K · prov. overall: 18
46. Llama 4 Maverick · Meta · Open Weight · 1M · prov. overall: 18
47. Llama 4 Behemoth · Meta · Open Weight · 32K · prov. overall: 12
48. Mistral 7B v0.3 · Mistral · Open Weight · 32K · prov. overall: 5
49. Mistral 8x7B v0.2 · Mistral · Open Weight · 32K · prov. overall: 2

These rankings update weekly

Get notified when models move. One email a week with what changed and why.

Free. No spam. Unsubscribe anytime.

Key Takeaways

GLM-5.1 by Z.AI tops the ranking with a provisional score of 84, narrowly ahead of GLM-5 (Reasoning) and Kimi 2.6.

All 49 models ranked here are open-weight and can be self-hosted.

Score in Context

What these scores mean

Open-weight models are ranked by the same overall BenchLM score as proprietary ones. The gap has closed significantly — the best open models score within 5-10 points of the top closed APIs.

Known limitations

Open-weight models typically have smaller context windows (128K vs 1M+), which matters for long-document and agentic tasks. Self-hosting costs (GPU, inference optimization) are not reflected in benchmark scores.
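Whether the smaller context window matters for you is checkable before picking a model. The sketch below estimates whether a document fits a given window; the 4-characters-per-token ratio is a rough English-text heuristic, the output-headroom figure is an assumption, and context limits are taken from the ranking above.

```python
# Quick check: will a document fit in a model's context window?
# Context limits (in tokens) are taken from the ranking above.
CONTEXT_LIMITS = {
    "GLM-5.1": 203_000,
    "Qwen2.5-1M": 1_000_000,
    "DeepSeek V3.2": 128_000,
}

def estimate_tokens(text_chars: int, chars_per_token: float = 4.0) -> int:
    """Rough heuristic: ~4 characters per token for English text."""
    return int(text_chars / chars_per_token)

def fits(model: str, text_chars: int, reserve_for_output: int = 4_000) -> bool:
    """Leave headroom for the model's own output tokens."""
    return estimate_tokens(text_chars) + reserve_for_output <= CONTEXT_LIMITS[model]

# A ~600K-character document (~150K tokens) overflows a 128K window
# but fits easily in a 1M one.
print(fits("DeepSeek V3.2", 600_000), fits("Qwen2.5-1M", 600_000))
```

For precise counts, use the tokenizer shipped with the specific model rather than a character heuristic; ratios vary by language and by tokenizer.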

Last updated: April 20, 2026
