188 models · 150 benchmarks

Compare frontier AI models by quality, cost, and context

106 provisional-ranked models, 11 verified-ranked models, and 188 tracked LLMs. The most comprehensive LLM comparison tool — 150 benchmarks, real pricing, and runtime data in one place.

The BenchLM LLM leaderboard 2026 provisionally ranks 106+ models and tracks 188+ large language models side by side across 150 benchmarks — from SWE-bench and LiveCodeBench for coding to GPQA Diamond and MMLU-Pro for knowledge and reasoning. Whether you need the best AI models 2026 has to offer for agentic workflows, math, multilingual tasks, or instruction following, our AI benchmark comparison tables make it easy to see how GPT-5, Claude, Gemini, DeepSeek, Llama, and dozens of other frontier and open-source models stack up on both benchmarks and operator tradeoffs like price and context. The main leaderboard now distinguishes provisional ranking from verified ranking so you can see which scores rest on exact-source coverage and which still rely on source-unverified public rows.

Compare models instantly


Decision-ready picks

The fastest way to scan the current BenchLM dataset by outcome instead of just by benchmark.

The AI Race
Current Crown (model released this month)

Claude Mythos Preview · Anthropic · 99

Provider Podium

1st · Anthropic · 92.3
2nd · OpenAI · 91.7
3rd · Google · 88

6 months tracked · 89 total releases · 5 crown changes

Unified Model Leaderboard

Benchmarks, pricing, runtime signals, and context window in one table. Filter state syncs to the URL so every view is shareable. Provisional-ranked mode includes source-unverified non-generated benchmark evidence.
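
As a rough illustration of URL-synced filter state, here is a minimal sketch in Python; the filter keys and serialization scheme are assumptions for illustration, not BenchLM's actual implementation.

```python
# Minimal sketch of URL-synced filter state. The filter keys below
# ("provider", "mode", "status") are hypothetical examples, not the
# actual BenchLM query parameters.
from urllib.parse import urlencode, parse_qs

def filters_to_query(filters: dict) -> str:
    """Serialize active filters into a shareable query string."""
    active = {k: v for k, v in filters.items() if v}  # drop unset filters
    return urlencode(active)

def query_to_filters(query: str) -> dict:
    """Restore filter state from a shared URL's query string."""
    return {k: v[0] for k, v in parse_qs(query).items()}

state = {"provider": "OpenAI", "mode": "reasoning", "status": ""}
query = filters_to_query(state)        # 'provider=OpenAI&mode=reasoning'
assert query_to_filters(query) == {"provider": "OpenAI", "mode": "reasoning"}
```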

188 models
Score confidence: Full sourced coverage · Good sourced coverage · Limited sourced coverage · Estimated
[Leaderboard table: top 25 of 188 models, listing rank, model, provider, access (open/closed), status (current/superseded/established), mode (standard/reasoning), context window, input/output pricing, runtime signals, per-category benchmark scores, and overall score. Providers shown include Anthropic, OpenAI, Google, xAI, Alibaba, Z.AI, and Moonshot AI; named models include GPT-5.4, GPT-5.2, GPT-5.1, and GLM-5.]
Showing 25 of 188

AI models change fast. We track them for you.

For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.

Free. No spam. Unsubscribe anytime.

Scoring Methodology

Each model's overall score is a normalized weighted average of category averages. Within each category, benchmark scores are normalized to a common scale and combined using per-benchmark weights that favor harder, less-saturated evaluations.
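
A minimal sketch of that aggregation, reading "normalized" as renormalizing weights over the categories a model actually has data for; the normalization bounds and per-benchmark weights here are illustrative assumptions, not the published values.

```python
# Sketch of the scoring pipeline described above. Normalization bounds
# and per-benchmark weights are assumptions for illustration only.

def normalize(score: float, lo: float = 0.0, hi: float = 100.0) -> float:
    """Map a raw benchmark score onto the common 0-100 scale."""
    return 100.0 * (score - lo) / (hi - lo)

def category_average(results: dict[str, float],
                     weights: dict[str, float]) -> float:
    """Weighted average of normalized scores within one category.

    Harder, less-saturated benchmarks carry larger weights."""
    total = sum(weights[b] for b in results)
    return sum(weights[b] * normalize(s) for b, s in results.items()) / total

def overall_score(category_scores: dict[str, float],
                  category_weights: dict[str, float]) -> float:
    """Weighted average of category averages, with weights renormalized
    over the categories that have data for this model."""
    present = {c: w for c, w in category_weights.items()
               if c in category_scores}
    total = sum(present.values())
    return sum(w * category_scores[c] for c, w in present.items()) / total
```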

Each score includes a confidence indicator (1-4 dots) showing how much sourced benchmark data supports it — models with no non-generated benchmark coverage are marked as estimated.
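
A sketch of how such an indicator might be derived from sourced coverage; only the four tier labels come from the legend above, and the thresholds are assumptions.

```python
# Illustrative mapping from sourced-coverage fraction to the four
# confidence tiers. The 0.4 / 0.8 thresholds are assumed, not published.

def confidence_dots(sourced_fraction: float) -> tuple[int, str]:
    """Return (dots, label) for a model's sourced benchmark coverage."""
    if sourced_fraction == 0.0:
        return 1, "Estimated"  # no non-generated benchmark coverage
    if sourced_fraction < 0.4:
        return 2, "Limited sourced coverage"
    if sourced_fraction < 0.8:
        return 3, "Good sourced coverage"
    return 4, "Full sourced coverage"
```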

Display-only benchmarks like MMLU, HumanEval, BBH, LisanBench, FLTEval, and the AIME/HMMT exams remain visible but are excluded from scoring.

Data sourced from OpenBench, official model papers, and public leaderboards. External consensus signals are used as bounded calibration inputs but are not exposed in exported data.
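
One plausible reading of "bounded calibration" is that a consensus signal may shift a computed score only within a fixed band; the band width below is an assumption, not a documented value.

```python
# Hypothetical bounded calibration: an external consensus score can pull
# a computed score at most MAX_SHIFT points in either direction.

MAX_SHIFT = 2.0  # assumed band width, in points on the 0-100 scale

def calibrate(raw_score: float, consensus_score: float) -> float:
    """Clamp the consensus-driven adjustment to the calibration band."""
    shift = max(-MAX_SHIFT, min(MAX_SHIFT, consensus_score - raw_score))
    return raw_score + shift
```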

Agentic · 22%

Terminal-Bench 2.0 · OSWorld-Verified · BrowseComp · GAIA · Tau-Bench · WebArena

Coding · 20%

SWE-Rebench · SWE-bench Pro · LiveCodeBench · SWE-bench Verified · SciCode

Reasoning · 17%

LongBench v2 · ARC-AGI-2 · MRCRv2 · MuSR

Multimodal · 12%

MMMU-Pro · OfficeQA Pro

Knowledge · 12%

HLE · MMLU-Pro · FrontierScience · SimpleQA · GPQA · SuperGPQA

Multilingual · 7%

MMLU-ProX · MGSM

Instruction Following · 5%

IFEval · IFBench

Math · 5%

FrontierMath · AIME 2025 · BRUMO 2025 · MATH-500
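
Putting the published category weights together, a worked example of the final step; the per-category averages are invented numbers purely to show the arithmetic.

```python
# Overall score from the category weights listed above. The weights are
# the page's published values; the category averages are made up.

CATEGORY_WEIGHTS = {
    "Agentic": 0.22, "Coding": 0.20, "Reasoning": 0.17,
    "Multimodal": 0.12, "Knowledge": 0.12, "Multilingual": 0.07,
    "Instruction Following": 0.05, "Math": 0.05,
}
assert abs(sum(CATEGORY_WEIGHTS.values()) - 1.0) < 1e-9

category_averages = {  # hypothetical model, already on the 0-100 scale
    "Agentic": 88, "Coding": 91, "Reasoning": 85, "Multimodal": 80,
    "Knowledge": 90, "Multilingual": 78, "Instruction Following": 95,
    "Math": 92,
}

overall = sum(w * category_averages[c] for c, w in CATEGORY_WEIGHTS.items())
print(round(overall, 2))  # 87.22
```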