
LLM Price vs Performance Chart

Find the most cost-effective AI model. Each dot is an LLM plotted by its provisional benchmark score (higher is better) against output token price (lower is better). Models on the efficiency frontier offer the best value at their price point.

Best Value

Gemini 3.1 Flash-Lite

Score/$: 127.5 · $0.40/1M out

Highest Score

Claude Mythos Preview

Score: 99 · $125.00/1M out

Cheapest Ranked

Gemini 3.1 Flash-Lite

Score: 51 · $0.40/1M out


Top 10 Best Value Models (Overall)

Ranked by Score/$ ratio (benchmark score per dollar of output token cost)

#    Model                   Provider      Score   Output $/1M   Score/$
1    Gemini 3.1 Flash-Lite   Google           51         $0.40     127.5
2    GPT-4o mini             OpenAI           45         $0.60      75.0
3    GPT-4.1 nano            OpenAI           28         $0.40      70.0
4    Gemini 2.5 Flash        Google           40         $0.60      66.7
5    MiniMax M2.7            MiniMax          65         $1.20      54.2
6    DeepSeek Coder 2.0      DeepSeek         53         $1.10      48.2
7    DeepSeek V3             DeepSeek         38         $1.10      34.5
8    GPT-4.1 mini            OpenAI           47         $1.60      29.4
9    Kimi K2.5               Moonshot AI      68         $2.80      24.3
10   Gemini 3 Flash          Google           67         $3.00      22.3
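The Score/$ column is straightforward to reproduce: benchmark score divided by output price per million tokens. A minimal sketch (the function name is my own, not from the site):

```python
def score_per_dollar(score: float, output_price_per_1m: float) -> float:
    """Benchmark score earned per dollar of output token cost."""
    return score / output_price_per_1m

# Row 1 of the table: Gemini 3.1 Flash-Lite, score 51 at $0.40/1M output tokens
ratio = round(score_per_dollar(51, 0.40), 1)
print(ratio)  # 127.5
```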

Frequently Asked Questions

What is the LLM price-performance chart?

This chart plots each AI model by its benchmark score (vertical axis) against its API output price per million tokens (horizontal axis). Models in the upper-left quadrant offer the best value — high performance at low cost. The efficiency frontier line connects the best-value models at each price point.

What is the efficiency frontier?

The efficiency frontier (Pareto frontier) connects models where no other model offers both a higher score and a lower price. Models on this line represent the optimal price-performance tradeoff. If a model sits below and to the right of the frontier, at least one other model offers an equal or better score at an equal or lower price, and is strictly better on one of the two.
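The frontier test described above is a Pareto filter: drop any model for which some other model is at least as good on both axes and strictly better on one. A minimal sketch (not the site's actual code), using a few rows from the value table:

```python
def efficiency_frontier(models):
    """Keep models not Pareto-dominated on (score up, price down).

    models: list of (name, score, price_per_1m_output) tuples.
    """
    frontier = []
    for name, score, price in models:
        dominated = any(
            (s > score and p <= price) or (s >= score and p < price)
            for _, s, p in models
        )
        if not dominated:
            frontier.append(name)
    return frontier

models = [
    ("Gemini 3.1 Flash-Lite", 51, 0.40),
    ("GPT-4.1 nano", 28, 0.40),   # dominated: same price, lower score
    ("GPT-4o mini", 45, 0.60),    # dominated: pricier and lower score
    ("Kimi K2.5", 68, 2.80),      # highest score in this subset
]
frontier = efficiency_frontier(models)
print(frontier)  # ['Gemini 3.1 Flash-Lite', 'Kimi K2.5']
```

Note the quadratic scan is fine at this scale; sorting by price and sweeping for a running max score would do the same job in O(n log n).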

Which LLM has the best price-to-performance ratio?

Currently, Gemini 3.1 Flash-Lite by Google offers the best overall value with a Score/$ ratio of 127.5. This means you get 127.5 benchmark points per dollar of output token cost.

How are scores calculated?

Overall scores shown in this chart use BenchLM's provisional ranking lane: a normalized weighted average across 8 benchmark categories, with bounded external calibration. The verified leaderboard is stricter and sourced-only, but this price-performance surface intentionally stays broader so value comparisons cover more models.
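As an illustration only (the actual BenchLM category names and weights are not given here, so everything below is hypothetical), a normalized weighted average across categories looks like this:

```python
def overall_score(category_scores: dict, weights: dict) -> float:
    """Weight-normalized average of per-category scores.

    Assumes each category score is already normalized to a common
    0-100 scale; categories absent from category_scores simply
    drop out of the average.
    """
    total_weight = sum(weights[c] for c in category_scores)
    weighted_sum = sum(category_scores[c] * weights[c] for c in category_scores)
    return weighted_sum / total_weight

# Hypothetical categories and weights, for illustration only
scores = {"reasoning": 80.0, "coding": 60.0, "math": 70.0}
weights = {"reasoning": 2.0, "coding": 1.0, "math": 1.0}
print(overall_score(scores, weights))  # 72.5
```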

AI models change fast. We track them for you.

For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.

Free. No spam. Unsubscribe anytime.