
Best Value LLM for Coding in 2026 — Cost-Adjusted Rankings

Raw benchmark scores only tell half the story. This ranking divides each model's weighted coding score by its output token price, surfacing models that deliver the most coding capability per dollar spent. A model scoring 70 at $1/1M tokens outranks one scoring 80 at $15/1M tokens here — because for the same budget you get far more coding work done. Use this alongside the standard coding leaderboard to find the sweet spot between performance and cost for your coding workflows.
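The arithmetic behind that comparison is a straight division. A minimal sketch, using the illustrative figures from the paragraph above (not real models from the table):

```python
def value_score(coding_score: float, output_price_per_m: float) -> float:
    """Cost-adjusted value: weighted coding score per dollar of output tokens."""
    return coding_score / output_price_per_m

# The hypothetical pair from above: 70 at $1/1M vs. 80 at $15/1M.
cheap = value_score(70, 1.0)    # 70.0 points of capability per dollar
pricey = value_score(80, 15.0)  # about 5.33 points per dollar
assert cheap > pricey           # the cheaper model wins on value
```

The higher raw score loses by more than an order of magnitude once price is factored in, which is exactly the effect this ranking is designed to surface.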

Unless noted otherwise, ranking surfaces on this page use BenchLM's provisional leaderboard lane rather than the stricter sourced-only verified leaderboard.

Bottom line: Gemini 3.1 Flash-Lite dominates coding value at $0.40/1M output tokens. DeepSeek Coder 2.0 is the best value among dedicated coding models, pairing a strong raw score (62.1) with a low $1.1/1M price.

According to BenchLM.ai, Gemini 3.1 Flash-Lite leads this ranking with a score of 71.38, followed by DeepSeek Coder 2.0 (56.49) and MiniMax M2.7 (49.73). There is a significant gap between the leading models and the rest of the field.

The best open-weight option is DeepSeek Coder 2.0 (ranked #2 with a score of 56.49). Open-weight models are highly competitive in this category — self-hosting is a viable alternative to proprietary APIs.

This ranking is based on provisional weighted averages across the coding benchmarks tracked by BenchLM.ai. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.

What changed

Gemini 3.1 Flash-Lite leads coding value — most coding capability per dollar spent.

DeepSeek Coder 2.0 is the best value among dedicated coding models, with strong raw scores.

Gemini 2.5 Flash offers good coding value alongside broader general-purpose capabilities.


Full Rankings (25 models)

Score/$ is the weighted coding score divided by output price; prices are $ per 1M output tokens.

1. Gemini 3.1 Flash-Lite · Google · Proprietary · 1M context · Score/$ 71.38 (score 28.6, $0.4/1M)
2. DeepSeek Coder 2.0 · DeepSeek · Open Weight · 128K context · Score/$ 56.49 (score 62.1, $1.1/1M)
3. MiniMax M2.7 · MiniMax · Open Weight · 200K context · Score/$ 49.73 (score 59.7, $1.2/1M)
4. Gemini 2.5 Flash · Google · Proprietary · 1M context · Score/$ 31.94 (score 19.2, $0.6/1M)
5. Kimi K2.5 · Moonshot AI · Open Weight · 128K context · Score/$ 29.45 (score 82.5, $2.8/1M)
6. Gemini 3 Flash · Google · Proprietary · 1M context · Score/$ 19.77 (score 59.3, $3/1M)
7. GLM-5.1 · Z.AI · Open Weight · 203K context · Score/$ 19.3 (score 84.9, $4.4/1M)
8. Gemini 3.1 Pro · Google · Proprietary · 1M context · Score/$ 19.08 (score 95.4, $5/1M)
9. DeepSeek-R1 · DeepSeek · Open Weight · 128K context · Score/$ 13.58 (score 29.7, $2.19/1M)
10. GPT-5.1 · OpenAI · Proprietary · 200K context · Score/$ 13.47 (score 80.8, $6/1M)
11. Claude Haiku 4.5 · Anthropic · Proprietary · 200K context · Score/$ 10.89 (score 54.5, $5/1M)
12. Gemini 2.5 Pro · Google · Proprietary · 1M context · Score/$ 10.44 (score 52.2, $5/1M)
13. GPT-5.2 · OpenAI · Proprietary · 400K context · Score/$ 10.42 (score 83.4, $8/1M)
14. GPT-5.2-Codex · OpenAI · Proprietary · 400K context · Score/$ 10.15 (score 81.2, $8/1M)
15. GPT-5.3 Codex · OpenAI · Proprietary · 400K context · Score/$ 8.83 (score 88.3, $10/1M)
16. Mistral Large 3 · Mistral · Proprietary · 128K context · Score/$ 6.9 (score 41.4, $6/1M)
17. GPT-5.4 · OpenAI · Proprietary · 1.05M context · Score/$ 6.07 (score 91, $15/1M)
18. Claude Sonnet 4.6 · Anthropic · Proprietary · 200K context · Score/$ 5.62 (score 84.3, $15/1M)
19. Claude Sonnet 4.5 · Anthropic · Proprietary · 200K context · Score/$ 5.29 (score 79.3, $15/1M)
20. Grok 4.1 · xAI · Proprietary · 1M context · Score/$ 4.55 (score 68.2, $15/1M)
21. Claude Opus 4.7 · Anthropic · Proprietary · 1M context · Score/$ 3.7 (score 92.6, $25/1M)
22. Claude Opus 4.6 · Anthropic · Proprietary · 1M context · Score/$ 3.63 (score 90.8, $25/1M)
23. GPT-4o · OpenAI · Proprietary · 128K context · Score/$ 2.73 (score 27.3, $10/1M)
24. o3 · OpenAI · Proprietary · 200K context · Score/$ 1.69 (score 67.7, $40/1M)
25. Claude Mythos Preview · Anthropic · Proprietary · 1M context · Score/$ 0.8 (score 100, $125/1M)

These rankings update weekly

Get notified when models move. One email a week with what changed and why.

Free. No spam. Unsubscribe anytime.

Key Takeaways

The best value model is Gemini 3.1 Flash-Lite by Google with a provisional Score/$ ratio of 71.38 (score: 28.6, output: $0.4/1M tokens).

The best open-weight model is DeepSeek Coder 2.0 at position #2.

25 models are included in this ranking.

Score in Context

What these scores mean

Value scores divide the weighted coding score by output token price (per 1M tokens). Higher means more capability per dollar. Models with no listed price are excluded.
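That computation can be reproduced directly from the table. A sketch using three entries excerpted from the rankings above (displayed scores are rounded, so recomputed ratios can differ from the published ones by a few hundredths):

```python
# (model name, raw coding score, output price in $ per 1M tokens)
models = [
    ("Kimi K2.5", 82.5, 2.8),
    ("GPT-5.2", 83.4, 8.0),
    ("o3", 67.7, 40.0),
]

# Rank by Score/$: raw score divided by output price, highest first.
ranked = sorted(
    ((name, score / price) for name, score, price in models),
    key=lambda pair: pair[1],
    reverse=True,
)
# ranked[0] is the best value of the three: Kimi K2.5 at roughly 29.46.
```

Note how o3 falls to the bottom despite a respectable raw score: at $40/1M, price dominates the ratio.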

Known limitations

Value rankings favor cheap models even if absolute performance is modest. A model scoring half as well at one-tenth the price wins on value — but may not meet your quality bar. Always check raw scores alongside value rankings.
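One way to apply that caveat in practice: filter to models above your quality bar first, then rank only the survivors by value. A sketch using an excerpt of the table above (the threshold of 80 is an arbitrary example, not a BenchLM recommendation):

```python
# (model name, raw coding score, output price in $ per 1M tokens)
models = [
    ("Gemini 3.1 Flash-Lite", 28.6, 0.4),
    ("DeepSeek Coder 2.0", 62.1, 1.1),
    ("Kimi K2.5", 82.5, 2.8),
    ("Gemini 3.1 Pro", 95.4, 5.0),
    ("GPT-5.2", 83.4, 8.0),
]

MIN_SCORE = 80  # quality bar: ignore models below this raw score

best = max(
    (m for m in models if m[1] >= MIN_SCORE),
    key=lambda m: m[1] / m[2],  # Score/$ among qualifying models only
)
print(best[0])  # Kimi K2.5
```

Without the filter, Gemini 3.1 Flash-Lite would win on raw Score/$ despite a 28.6 score; with the bar at 80, Kimi K2.5 comes out on top.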

Last updated: April 16, 2026
