LLM Price vs Performance Chart
Find the most cost-effective AI model. Each dot is an LLM plotted by its provisional benchmark score (higher is better) against output token price (lower is better). Models on the efficiency frontier offer the best value at their price point.
[Chart: highlighted points include Gemini 3.1 Flash-Lite (score 51, $0.40/1M out, Score/$ 127.5) and Claude Mythos Preview (score 99, $125.00/1M out)]
Top 10 Best Value Models (Overall)
Ranked by Score/$ ratio (benchmark score per dollar of output token cost)
| # | Model | Provider | Score | Output $/1M | Score/$ |
|---|---|---|---|---|---|
| 1 | Gemini 3.1 Flash-Lite | Google | 51 | $0.40 | 127.5 |
| 2 | GPT-4o mini | OpenAI | 45 | $0.60 | 75.0 |
| 3 | GPT-4.1 nano | OpenAI | 28 | $0.40 | 70.0 |
| 4 | Gemini 2.5 Flash | Google | 40 | $0.60 | 66.7 |
| 5 | MiniMax M2.7 | MiniMax | 65 | $1.20 | 54.2 |
| 6 | DeepSeek Coder 2.0 | DeepSeek | 53 | $1.10 | 48.2 |
| 7 | DeepSeek V3 | DeepSeek | 38 | $1.10 | 34.5 |
| 8 | GPT-4.1 mini | OpenAI | 47 | $1.60 | 29.4 |
| 9 | Kimi K2.5 | Moonshot AI | 68 | $2.80 | 24.3 |
| 10 | Gemini 3 Flash | Google | 67 | $3.00 | 22.3 |
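The ranking above can be reproduced with a few lines of code: divide each model's score by its output price and sort descending. A minimal sketch, using a handful of rows from the table (the function name is our own, not part of any BenchLM API):

```python
# Score and output price ($/1M tokens) for a few models from the table.
models = {
    "Gemini 3.1 Flash-Lite": (51, 0.40),
    "GPT-4o mini": (45, 0.60),
    "GPT-4.1 nano": (28, 0.40),
    "Gemini 2.5 Flash": (40, 0.60),
    "MiniMax M2.7": (65, 1.20),
}

def score_per_dollar(score: float, price: float) -> float:
    """Benchmark points per dollar of output tokens."""
    return score / price

# Rank models by value, best first.
ranked = sorted(models.items(),
                key=lambda kv: score_per_dollar(*kv[1]),
                reverse=True)

for name, (score, price) in ranked:
    print(f"{name}: {score_per_dollar(score, price):.1f}")
```

Running this reproduces the published ratios, e.g. 51 / 0.40 = 127.5 for Gemini 3.1 Flash-Lite.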
Compare all LLM API prices side by side
Cost-adjusted coding model rankings
Cost-adjusted agentic model rankings
Frequently Asked Questions
What is the LLM price-performance chart?
This chart plots each AI model by its benchmark score (vertical axis) against its API output price per million tokens (horizontal axis). Models in the upper-left quadrant offer the best value — high performance at low cost. The efficiency frontier line connects the best-value models at each price point.
What is the efficiency frontier?
The efficiency frontier (Pareto frontier) connects models where no other model is both cheaper and higher-scoring. Models on this line represent the optimal price-performance tradeoff. If a model sits below and to the right of the frontier, it is dominated: some other model is at least as cheap and scores at least as high, and is strictly better on one of the two axes.
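The dominance test behind the frontier can be sketched as a small filter: keep a model only if no other model beats it on one axis without losing on the other. The function name and the three sample points (taken from the table above) are illustrative:

```python
def pareto_frontier(models: dict) -> list:
    """Return names of models not dominated by any other model.

    A model is dominated if some other model is at least as cheap
    with a strictly higher score, or strictly cheaper with a score
    at least as high.
    """
    frontier = []
    for name, (score, price) in models.items():
        dominated = any(
            (p <= price and s > score) or (p < price and s >= score)
            for other, (s, p) in models.items() if other != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

models = {
    "Gemini 3.1 Flash-Lite": (51, 0.40),  # cheap and strong: on the frontier
    "GPT-4.1 nano": (28, 0.40),           # same price, lower score: dominated
    "Kimi K2.5": (68, 2.80),              # pricier but higher score: on the frontier
}
print(pareto_frontier(models))  # ['Gemini 3.1 Flash-Lite', 'Kimi K2.5']
```

GPT-4.1 nano drops out because Gemini 3.1 Flash-Lite costs the same but scores higher; Kimi K2.5 stays because nothing cheaper matches its score.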
Which LLM has the best price-to-performance ratio?
Currently, Gemini 3.1 Flash-Lite by Google offers the best overall value with a Score/$ ratio of 127.5. This means you get 127.5 benchmark points per dollar of output token cost.
How are scores calculated?
Overall scores shown in this chart use BenchLM's provisional ranking lane: a normalized weighted average across 8 benchmark categories, with bounded external calibration. The verified leaderboard is stricter and sourced-only, but this price-performance surface intentionally stays broader so value comparisons cover more models.
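A "normalized weighted average across benchmark categories" can be sketched as follows. The category names, weights, raw scores, and min-max normalization here are illustrative assumptions, not BenchLM's actual weights or data:

```python
def normalized_weighted_score(raw: dict, bounds: dict, weights: dict) -> float:
    """Min-max normalize each category score to 0-100, then take a
    weighted average. All inputs here are hypothetical examples."""
    total_weight = sum(weights.values())
    score = 0.0
    for category, value in raw.items():
        lo, hi = bounds[category]
        normalized = 100 * (value - lo) / (hi - lo)  # scale to 0-100
        score += weights[category] * normalized
    return score / total_weight

# Hypothetical two-category example (a real pipeline would use 8 categories).
raw = {"coding": 62, "reasoning": 48}
bounds = {"coding": (0, 100), "reasoning": (0, 100)}
weights = {"coding": 0.6, "reasoning": 0.4}

print(round(normalized_weighted_score(raw, bounds, weights), 1))  # 56.4
```

With 0-100 bounds the normalization is a no-op, so the result is just 0.6 × 62 + 0.4 × 48 = 56.4; with other bounds (e.g. a benchmark scored 0-10) the min-max step puts all categories on a common scale before weighting.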
The AI models change fast. We track them for you.
For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.
Free. No spam. Unsubscribe anytime.