
Best Large Context Window LLMs in 2026

AI models with the largest context windows (200K+ tokens), ranked by benchmark performance.

Unless noted otherwise, ranking surfaces on this page use BenchLM's provisional leaderboard lane rather than the stricter sourced-only verified leaderboard.

Bottom line: A large context window means nothing if the model can't actually use it. Claude Mythos Preview and Gemini 3.1 Pro both have 1M+ context and the benchmarks to back it up.

According to BenchLM.ai, Claude Mythos Preview leads this ranking with a score of 99, followed by Claude Opus 4.7 (97) and GPT-5.4 (93). There is meaningful separation between the top models, suggesting genuine performance differences.

The best open-weight option is GLM-5.1 (ranked #10 with a score of 84). While proprietary models lead, open-weight options are within striking distance for teams willing to trade a few points of performance for full model control.

This ranking is based on provisional overall weighted scores from BenchLM.ai's scoring formula. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
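Since the exact formula isn't published on this page, the sketch below only illustrates the general shape of a "provisional overall weighted score": a weighted average of per-benchmark scores. The benchmark names and weights are made up for illustration and are not BenchLM's actual components.

```python
# Hypothetical sketch of a weighted overall score. The benchmark names and
# weights below are illustrative assumptions, not BenchLM's real formula.
def weighted_overall(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of benchmark scores, normalized by total weight."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Example with made-up benchmark names and weights:
scores = {"long_context": 92.0, "reasoning": 88.0, "coding": 95.0}
weights = {"long_context": 0.5, "reasoning": 0.3, "coding": 0.2}
print(round(weighted_overall(scores, weights), 1))  # 91.4
```

Normalizing by the total weight means the result stays on the same 0–100 scale as the inputs even if the weights don't sum to exactly 1.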

What changed

Claude Mythos Preview leads large-context models with 1M context and the highest overall score.

Gemini 3.1 Pro: 1M context with a score of 93, the best non-reasoning large-context model.

GPT-5.4: 1.05M context, the largest window among the top three models overall.


Full Rankings (63 models)

1. Claude Mythos Preview · Anthropic · Proprietary · 1M · 99 prov. overall
2. Claude Opus 4.7 · Anthropic · Proprietary · 1M · 97 prov. overall
3. GPT-5.4 · OpenAI · Proprietary · 1.05M · 93 prov. overall
4. Gemini 3.1 Pro · Google · Proprietary · 1M · 93 prov. overall
5. GPT-5.4 Pro · OpenAI · Proprietary · 1.05M · 92 prov. overall
6. Claude Opus 4.6 · Anthropic · Proprietary · 1M · 91 prov. overall
7. GPT-5.3 Codex · OpenAI · Proprietary · 400K · 89 prov. overall
8. Gemini 3 Pro Deep Think · Google · Proprietary · 2M · 86 prov. overall
9. Claude Sonnet 4.6 · Anthropic · Proprietary · 200K · 85 prov. overall
10. GLM-5.1 · Z.AI · Open Weight · 203K · 84 prov. overall
11. GLM-5 (Reasoning) · Z.AI · Open Weight · 200K · 84 prov. overall
12. GPT-5.2 · OpenAI · Proprietary · 400K · 83 prov. overall
13. Kimi 2.6 · Moonshot AI · Open Weight · 256K · 83 prov. overall
14. Gemini 3 Pro · Google · Proprietary · 2M · 83 prov. overall
15. Claude Opus 4.5 · Anthropic · Proprietary · 200K · 80 prov. overall
16. GPT-5.1 · OpenAI · Proprietary · 200K · 80 prov. overall
17. GPT-5.2-Codex · OpenAI · Proprietary · 400K · 80 prov. overall
18. Grok 4.1 · xAI · Proprietary · 1M · 80 prov. overall
19. GPT-5.1-Codex-Max · OpenAI · Proprietary · 400K · 78 prov. overall
20. GLM-5 · Z.AI · Open Weight · 200K · 77 prov. overall
21. Qwen3.6 Plus · Alibaba · Proprietary · 1M · 77 prov. overall
22. Grok 4.20 · xAI · Proprietary · 2M · 77 prov. overall
23. GPT-5.4 mini · OpenAI · Proprietary · 400K · 73 prov. overall
24. Grok 4.1 Fast · xAI · Proprietary · 1M · 72 prov. overall
25. GLM-4.7 · Z.AI · Open Weight · 200K · 71 prov. overall
26. Qwen3.6-35B-A3B · Alibaba · Open Weight · 262K · 70 prov. overall
27. Kimi K2.5 · Moonshot AI · Open Weight · 256K · 68 prov. overall
28. Qwen3.5-122B-A10B · Alibaba · Open Weight · 262K · 68 prov. overall
29. Claude Sonnet 4.5 · Anthropic · Proprietary · 200K · 68 prov. overall
30. o1-preview · OpenAI · Proprietary · 200K · 68 prov. overall
31. Gemini 2.5 Pro · Google · Proprietary · 1M · 67 prov. overall
32. Gemini 3 Flash · Google · Proprietary · 1M · 67 prov. overall
33. Gemma 4 31B · Google · Open Weight · 256K · 66 prov. overall
34. Qwen3.5-27B · Alibaba · Open Weight · 262K · 65 prov. overall
35. MiniMax M2.7 · MiniMax · Open Weight · 200K · 64 prov. overall
36. MiMo-V2-Flash · Xiaomi · Open Weight · 256K · 62 prov. overall
37. GPT-4.1 · OpenAI · Proprietary · 1M · 60 prov. overall
38. Qwen3.5-35B-A3B · Alibaba · Open Weight · 262K · 59 prov. overall
39. Claude Haiku 4.5 · Anthropic · Proprietary · 200K · 59 prov. overall
40. o1 · OpenAI · Proprietary · 200K · 59 prov. overall
41. o3 · OpenAI · Proprietary · 200K · 59 prov. overall
42. o3-pro · OpenAI · Proprietary · 200K · 59 prov. overall
43. Gemma 4 26B A4B · Google · Open Weight · 256K · 58 prov. overall
44. o3-mini · OpenAI · Proprietary · 200K · 58 prov. overall
45. Claude 4.1 Opus · Anthropic · Proprietary · 200K · 53 prov. overall
46. Qwen2.5-1M · Alibaba · Open Weight · 1M · 53 prov. overall
47. Claude 4 Sonnet · Anthropic · Proprietary · 200K · 52 prov. overall
48. Gemini 3.1 Flash-Lite · Google · Proprietary · 1M · 51 prov. overall
49. Nemotron 3 Ultra 500B · NVIDIA · Open Weight · 10M · 48 prov. overall
50. GPT-4.1 mini · OpenAI · Proprietary · 1M · 47 prov. overall
51. Nemotron 3 Super 100B · NVIDIA · Open Weight · 1M · 46 prov. overall
52. o4-mini (high) · OpenAI · Proprietary · 200K · 46 prov. overall
53. Claude 4.1 Opus Thinking · Anthropic · Proprietary · 200K · 45 prov. overall
54. Claude 3.5 Sonnet · Anthropic · Proprietary · 200K · 42 prov. overall
55. Grok Code Fast 1 · xAI · Proprietary · 256K · 42 prov. overall
56. Gemini 2.5 Flash · Google · Proprietary · 1M · 40 prov. overall
57. Gemini 1.5 Pro · Google · Proprietary · 2M · 37 prov. overall
58. Claude 3 Opus · Anthropic · Proprietary · 200K · 36 prov. overall
59. o1-pro · OpenAI · Proprietary · 200K · 30 prov. overall
60. GPT-4.1 nano · OpenAI · Proprietary · 1M · 28 prov. overall
61. Claude 3 Haiku · Anthropic · Proprietary · 200K · 24 prov. overall
62. Llama 4 Scout · Meta · Open Weight · 10M · 24 prov. overall
63. Llama 4 Maverick · Meta · Open Weight · 1M · 18 prov. overall

These rankings update weekly


Key Takeaways

The top model is Claude Mythos Preview by Anthropic with a provisional score of 99.

The best open-weight model is GLM-5.1, ranked #10.

63 models are included in this ranking.

Score in Context

What these scores mean

Models are filtered by context window (200K+ tokens) and ranked by overall BenchLM score. A large context window alone is not enough — check long-context benchmark scores for actual retrieval and reasoning quality.
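The filter-then-rank step described above can be sketched in a few lines. The model data here is a small illustrative sample, not the full dataset behind this page:

```python
# Minimal sketch of how this list could be derived: keep models with a
# 200K+ token context window, then sort descending by overall score.
# Model entries are illustrative, not the page's actual source data.
MIN_CONTEXT = 200_000

models = [
    {"name": "Claude Mythos Preview", "context": 1_000_000, "score": 99},
    {"name": "GPT-5.3 Codex", "context": 400_000, "score": 89},
    {"name": "Small-Context Model", "context": 128_000, "score": 95},
]

ranked = sorted(
    (m for m in models if m["context"] >= MIN_CONTEXT),
    key=lambda m: m["score"],
    reverse=True,
)
print([m["name"] for m in ranked])
```

Note that the 128K model is dropped by the context filter even though its score would have placed it near the top, which is exactly why a high-scoring model can be absent from this page.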

Known limitations

Context window size is self-reported by providers. Actual usable context may be smaller due to edge degradation. Long-context benchmarks test specific patterns — real workloads may differ.
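To make "long-context benchmarks test specific patterns" concrete, here is a toy version of the common "needle in a haystack" probe: bury one fact in filler text and check whether it can be retrieved. The retriever here is a stand-in function, not a model call, and this is not BenchLM's actual harness:

```python
# Toy "needle in a haystack" probe: hide one fact at a random position in
# filler text, then retrieve it. A real benchmark would send the haystack
# to a model; toy_retriever is a deterministic stand-in for illustration.
import random

def build_haystack(needle: str, filler_lines: int, seed: int = 0) -> str:
    """Insert the needle at a seeded-random position among filler lines."""
    rng = random.Random(seed)
    lines = [f"Filler sentence number {i}." for i in range(filler_lines)]
    lines.insert(rng.randrange(len(lines) + 1), needle)
    return "\n".join(lines)

def toy_retriever(prompt: str, keyword: str) -> str:
    # Stand-in for a model call: return the first line containing the keyword.
    return next(line for line in prompt.splitlines() if keyword in line)

haystack = build_haystack("The secret code is 4417.", filler_lines=1000)
print(toy_retriever(haystack, "secret code"))  # The secret code is 4417.
```

Passing a probe like this only shows retrieval of a single planted fact; workloads that require reasoning over many scattered facts can still degrade well before the advertised window is full.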

Last updated: April 20, 2026
