LLM Price vs Performance Chart

Find the most cost-effective AI model. Each dot is an LLM plotted by its benchmark score (higher is better) against output token price (lower is better). Models on the efficiency frontier offer the best value at their price point.

Best Value: Gemini 3.1 Flash-Lite · Score/$: 140.0 · $0.40/1M out

Highest Score: GPT-5.4 Pro · Score: 92 · $180.00/1M out

Cheapest Ranked: Gemini 3.1 Flash-Lite · Score: 56 · $0.40/1M out


Top 10 Best Value Models (Overall)

Ranked by Score/$ ratio (benchmark score per dollar of output token cost); a sketch of how this ranking can be computed follows the table.

| # | Model | Vendor | Score | Output $/1M | Score/$ |
|---|-------|--------|-------|-------------|---------|
| 1 | Gemini 3.1 Flash-Lite | Google | 56 | $0.40 | 140.0 |
| 2 | GPT-4.1 nano | OpenAI | 44 | $0.40 | 110.0 |
| 3 | GPT-4o mini | OpenAI | 54 | $0.60 | 90.0 |
| 4 | Gemini 2.5 Flash | Google | 50 | $0.60 | 83.3 |
| 5 | DeepSeek Coder 2.0 | DeepSeek | 62 | $1.10 | 56.4 |
| 6 | GPT-5.4 nano | OpenAI | 58 | $1.25 | 46.4 |
| 7 | DeepSeek V3 | DeepSeek | 49 | $1.10 | 44.5 |
| 8 | GPT-4.1 mini | OpenAI | 57 | $1.60 | 35.6 |
| 9 | Kimi K2.5 | Moonshot AI | 72 | $2.80 | 25.7 |
| 10 | Gemini 3 Flash | Google | 67 | $3.00 | 22.3 |
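To make the ranking concrete, here is a minimal sketch of how a Score/$ table like the one above could be derived. The `Model` shape, its field names, and the `topByValue` helper are illustrative assumptions, not the site's actual data pipeline.

```typescript
// Illustrative model record; field names are assumptions, not the site's schema.
interface Model {
  name: string;
  vendor: string;
  score: number;        // overall benchmark score
  outputPrice: number;  // USD per 1M output tokens
}

// Score/$ = benchmark points per dollar of output token cost.
const valueRatio = (m: Model): number => m.score / m.outputPrice;

// Sort by Score/$ descending and keep the top N rows.
function topByValue(models: Model[], n: number): Model[] {
  return [...models]
    .sort((a, b) => valueRatio(b) - valueRatio(a))
    .slice(0, n);
}

// Two rows from the table above as sample data:
const sample: Model[] = [
  { name: "Kimi K2.5", vendor: "Moonshot AI", score: 72, outputPrice: 2.8 },
  { name: "Gemini 3.1 Flash-Lite", vendor: "Google", score: 56, outputPrice: 0.4 },
];
for (const m of topByValue(sample, 10)) {
  console.log(`${m.name}: ${valueRatio(m).toFixed(1)}`);
}
// Gemini 3.1 Flash-Lite: 140.0
// Kimi K2.5: 25.7
```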

Frequently Asked Questions

What is the LLM price-performance chart?

This chart plots each AI model by its benchmark score (vertical axis) against its API output price per million tokens (horizontal axis). Models in the upper-left quadrant offer the best value — high performance at low cost. The efficiency frontier line connects the best-value models at each price point.

What is the efficiency frontier?

The efficiency frontier (Pareto frontier) connects models where no other model offers both a higher score and a lower price. Models on this line represent the optimal price-performance tradeoff. If a model is below and to the right of the frontier, there exists a cheaper model with a better score.
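As a sketch of how such a frontier can be computed, using the same illustrative `Model` shape as above: sort models by price ascending, then keep each model whose score exceeds the best score seen so far. This is the standard sweep for a two-objective Pareto frontier, not necessarily the site's exact implementation.

```typescript
interface Model {
  name: string;
  score: number;        // higher is better
  outputPrice: number;  // USD per 1M output tokens; lower is better
}

// A model is on the efficiency (Pareto) frontier if no other model
// is both cheaper and higher-scoring.
function efficiencyFrontier(models: Model[]): Model[] {
  // Sort by price ascending; break price ties by score descending so
  // only the best model at each price can enter the frontier.
  const byPrice = [...models].sort(
    (a, b) => a.outputPrice - b.outputPrice || b.score - a.score
  );
  const frontier: Model[] = [];
  let bestScore = -Infinity;
  for (const m of byPrice) {
    // Sweeping left to right, keep only models that raise the running best score.
    if (m.score > bestScore) {
      frontier.push(m);
      bestScore = m.score;
    }
  }
  return frontier;
}
```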

Which LLM has the best price-to-performance ratio?

Currently, Gemini 3.1 Flash-Lite by Google offers the best overall value with a Score/$ ratio of 140.0: a benchmark score of 56 divided by an output price of $0.40 per million tokens gives 140.0 benchmark points per dollar.

How are scores calculated?

Overall scores are a normalized weighted average across 8 benchmark categories: agentic (22%), coding (20%), reasoning (17%), knowledge (12%), multimodal (12%), multilingual (7%), instruction following (5%), and math (5%). Category scores use weighted averages of individual benchmarks within each category. Only models with verified benchmark data are included.
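As a minimal sketch of that weighting scheme: the weights below come straight from the answer above, while the renormalization over available categories and the `overallScore` function name are assumptions about how missing category data might be handled.

```typescript
// Category weights from the FAQ answer (they sum to 1.0).
const WEIGHTS: Record<string, number> = {
  agentic: 0.22,
  coding: 0.20,
  reasoning: 0.17,
  knowledge: 0.12,
  multimodal: 0.12,
  multilingual: 0.07,
  instructionFollowing: 0.05,
  math: 0.05,
};

// Overall score = weighted average of category scores. Renormalizing over
// the categories that are actually present is an assumption, not confirmed
// behavior of the leaderboard.
function overallScore(categoryScores: Record<string, number>): number {
  let weighted = 0;
  let totalWeight = 0;
  for (const [category, weight] of Object.entries(WEIGHTS)) {
    const score = categoryScores[category];
    if (score !== undefined) {
      weighted += score * weight;
      totalWeight += weight;
    }
  }
  return totalWeight > 0 ? weighted / totalWeight : 0;
}

// Example: a model with verified data in only three categories.
console.log(overallScore({ coding: 80, math: 75, reasoning: 60 }).toFixed(1));
// (80*0.20 + 75*0.05 + 60*0.17) / (0.20 + 0.05 + 0.17) = 71.3
```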
