
Best Large Context Window LLMs in 2026

AI models with the largest context windows (200K+ tokens), ranked by benchmark performance.

Unless noted otherwise, the rankings on this page use BenchLM's provisional leaderboard lane rather than the stricter, sourced-only verified leaderboard.

Bottom line: A large context window means nothing if the model can't actually use it. Claude Mythos Preview and Gemini 3.1 Pro both have 1M+ context and the benchmarks to back it up.

According to BenchLM.ai, Claude Mythos Preview leads this ranking with a score of 99, followed by Gemini 3.1 Pro (94) and Claude Opus 4.7 (94). The five-point gap between first place and the tied runners-up suggests a genuine performance difference at the top.

The best open-weight option is GLM-5.1 (ranked #10 with a score of 84). While proprietary models lead, open-weight options are within striking distance for teams willing to trade a few points of performance for full model control.

This ranking is based on provisional overall weighted scores computed with BenchLM.ai's scoring formula. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
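To make "overall weighted score" concrete, here is a minimal sketch of how a weighted composite score can be computed. The benchmark names and weights below are made up for illustration; the article does not disclose BenchLM's actual formula.

```python
# Illustrative weighted-average composite score.
# NOTE: benchmark names and weights are hypothetical examples,
# not BenchLM's real scoring formula.

def weighted_overall(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-benchmark scores, normalized by total weight."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Example: three hypothetical benchmark axes for one model.
example = weighted_overall(
    {"reasoning": 97, "coding": 92, "long_context": 95},
    {"reasoning": 0.4, "coding": 0.3, "long_context": 0.3},
)
```

Because the weights are normalized, a model that is strong on heavily weighted axes can outrank one with a higher unweighted mean.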

What changed

Claude Mythos Preview leads large-context models with 1M context and the highest overall score.

Gemini 3.1 Pro combines a 1M context window with a reasoning score of 97, the best among non-reasoning large-context models.

GPT-5.4 has a 1.05M context window, the largest among the top 3 models overall.


Full Rankings (61 models)

All scores are provisional overall scores. Format: rank. model · provider · license · context window · score.

1. Claude Mythos Preview · Anthropic · Proprietary · 1M · 99
2. Gemini 3.1 Pro · Google · Proprietary · 1M · 94
3. Claude Opus 4.7 · Anthropic · Proprietary · 1M · 94
4. GPT-5.4 · OpenAI · Proprietary · 1.05M · 93
5. Claude Opus 4.6 · Anthropic · Proprietary · 1M · 92
6. GPT-5.4 Pro · OpenAI · Proprietary · 1.05M · 92
7. GPT-5.3 Codex · OpenAI · Proprietary · 400K · 89
8. Gemini 3 Pro Deep Think · Google · Proprietary · 2M · 87
9. Claude Sonnet 4.6 · Anthropic · Proprietary · 200K · 86
10. GLM-5.1 · Z.AI · Open Weight · 203K · 84
11. GLM-5 (Reasoning) · Z.AI · Open Weight · 200K · 84
12. Gemini 3 Pro · Google · Proprietary · 2M · 83
13. GPT-5.2 · OpenAI · Proprietary · 400K · 83
14. Claude Opus 4.5 · Anthropic · Proprietary · 200K · 80
15. GPT-5.1 · OpenAI · Proprietary · 200K · 80
16. GPT-5.2-Codex · OpenAI · Proprietary · 400K · 80
17. Grok 4.1 · xAI · Proprietary · 1M · 80
18. GPT-5.1-Codex-Max · OpenAI · Proprietary · 400K · 79
19. GLM-5 · Z.AI · Open Weight · 200K · 77
20. Qwen3.6 Plus · Alibaba · Proprietary · 1M · 77
21. Grok 4.20 · xAI · Proprietary · 2M · 77
22. GPT-5.4 mini · OpenAI · Proprietary · 400K · 73
23. GLM-4.7 · Z.AI · Open Weight · 200K · 72
24. Grok 4.1 Fast · xAI · Proprietary · 1M · 72
25. Qwen3.5-122B-A10B · Alibaba · Open Weight · 262K · 68
26. Claude Sonnet 4.5 · Anthropic · Proprietary · 200K · 68
27. o1-preview · OpenAI · Proprietary · 200K · 68
28. Gemini 2.5 Pro · Google · Proprietary · 1M · 67
29. Gemini 3 Flash · Google · Proprietary · 1M · 67
30. Gemma 4 31B · Google · Open Weight · 256K · 67
31. MiniMax M2.7 · MiniMax · Open Weight · 200K · 65
32. Qwen3.5-27B · Alibaba · Open Weight · 262K · 65
33. Qwen3.6-35B-A3B · Alibaba · Open Weight · 262K · 64
34. MiMo-V2-Flash · Xiaomi · Open Weight · 256K · 63
35. Claude Haiku 4.5 · Anthropic · Proprietary · 200K · 60
36. GPT-4.1 · OpenAI · Proprietary · 1M · 60
37. o3 · OpenAI · Proprietary · 200K · 60
38. Qwen3.5-35B-A3B · Alibaba · Open Weight · 262K · 59
39. o1 · OpenAI · Proprietary · 200K · 59
40. o3-pro · OpenAI · Proprietary · 200K · 59
41. Gemma 4 26B A4B · Google · Open Weight · 256K · 58
42. o3-mini · OpenAI · Proprietary · 200K · 58
43. Claude 4.1 Opus · Anthropic · Proprietary · 200K · 53
44. Qwen2.5-1M · Alibaba · Open Weight · 1M · 53
45. Claude 4 Sonnet · Anthropic · Proprietary · 200K · 52
46. Gemini 3.1 Flash-Lite · Google · Proprietary · 1M · 51
47. Nemotron 3 Ultra 500B · NVIDIA · Open Weight · 10M · 48
48. GPT-4.1 mini · OpenAI · Proprietary · 1M · 47
49. Nemotron 3 Super 100B · NVIDIA · Open Weight · 1M · 46
50. o4-mini (high) · OpenAI · Proprietary · 200K · 46
51. Claude 4.1 Opus Thinking · Anthropic · Proprietary · 200K · 45
52. Claude 3.5 Sonnet · Anthropic · Proprietary · 200K · 42
53. Grok Code Fast 1 · xAI · Proprietary · 256K · 42
54. Gemini 2.5 Flash · Google · Proprietary · 1M · 40
55. Gemini 1.5 Pro · Google · Proprietary · 2M · 38
56. Claude 3 Opus · Anthropic · Proprietary · 200K · 37
57. o1-pro · OpenAI · Proprietary · 200K · 30
58. GPT-4.1 nano · OpenAI · Proprietary · 1M · 28
59. Claude 3 Haiku · Anthropic · Proprietary · 200K · 24
60. Llama 4 Scout · Meta · Open Weight · 10M · 24
61. Llama 4 Maverick · Meta · Open Weight · 1M · 18

These rankings update weekly


Key Takeaways

The top model is Claude Mythos Preview by Anthropic with a provisional score of 99.

The best open-weight model is GLM-5.1, ranked #10 with a provisional score of 84.

61 models are included in this ranking.

Score in Context

What these scores mean

Models are filtered by context window (200K+ tokens) and ranked by overall BenchLM score. A large context window alone is not enough — check long-context benchmark scores for actual retrieval and reasoning quality.
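One common way long-context retrieval is probed is a "needle in a haystack" test: plant a fact at varying depths of a long document and check whether the model retrieves it. The sketch below is an illustrative harness only; the prompt format, needle text, and exact-substring grading are assumptions, not BenchLM's methodology, and `build_needle_prompt` is a hypothetical helper.

```python
# Minimal needle-in-a-haystack probe (illustrative sketch).
# The prompt wording and scoring rule here are assumptions,
# not any specific benchmark's actual setup.

def build_needle_prompt(filler: str, needle: str, depth: float) -> str:
    """Insert `needle` at a relative depth (0.0 = start, 1.0 = end)
    of the filler text, then ask the model to retrieve it."""
    pos = int(len(filler) * depth)
    context = filler[:pos] + "\n" + needle + "\n" + filler[pos:]
    return (
        "Answer using only the document below.\n\n"
        f"{context}\n\n"
        "Question: What is the secret passphrase mentioned in the document?"
    )

def passed(model_answer: str, expected: str) -> bool:
    """Exact-substring check; real harnesses often grade more fuzzily."""
    return expected.lower() in model_answer.lower()

filler = "Lorem ipsum dolor sit amet. " * 200
needle = "The secret passphrase is 'cobalt-heron-42'."

# Probe several insertion depths; a model that only attends to the
# start and end of its window tends to fail at middle depths.
prompts = {d: build_needle_prompt(filler, needle, d) for d in (0.0, 0.5, 1.0)}
```

Send each prompt to the model under test and tally `passed` per depth; a flat pass rate across depths is the behavior a large advertised window is supposed to buy you.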

Known limitations

Context window size is self-reported by providers. Actual usable context may be smaller due to edge degradation. Long-context benchmarks test specific patterns — real workloads may differ.
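Given that edge degradation, one practical habit is to budget below the advertised window. The sketch below illustrates the idea with a crude 4-characters-per-token heuristic and a 10% safety margin; both numbers are assumptions, and for real counts you should use your provider's tokenizer.

```python
# Rough guard against overrunning a nominal context window.
# ASSUMPTIONS: ~4 chars per token and a 10% headroom margin are
# illustrative defaults, not provider-specified values.

def estimate_tokens(text: str) -> int:
    """Crude token estimate; replace with a real tokenizer in practice."""
    return max(1, len(text) // 4)

def fits(text: str, window_tokens: int, margin: float = 0.10) -> bool:
    """True if `text` fits below the advertised window with headroom,
    since quality often degrades near the edge and the model's reply
    also consumes part of the window."""
    budget = int(window_tokens * (1 - margin))
    return estimate_tokens(text) <= budget
```

For example, `fits(document, 200_000)` only accepts inputs estimated at 180K tokens or fewer, leaving room for the response and for the degraded tail of the window.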

Last updated: April 16, 2026
