200 models · 152 benchmarks

Compare frontier AI models by quality, cost, and context

111 provisional-ranked models, 15 verified-ranked models, and 200 tracked LLMs. The most comprehensive LLM comparison tool — 152 benchmarks, real pricing, and runtime data in one place.

The BenchLM LLM leaderboard 2026 provisionally ranks 111+ models and tracks 200+ large language models side by side across 152 benchmarks — from SWE-bench and LiveCodeBench for coding to GPQA Diamond and MMLU-Pro for knowledge and reasoning. Whether you need the best AI models 2026 has to offer for agentic workflows, math, multilingual tasks, or instruction following, our AI benchmark comparison tables make it easy to see how GPT-5, Claude, Gemini, DeepSeek, Llama, and dozens of other frontier and open-source models stack up on both benchmarks and operator tradeoffs like price and context. The main leaderboard now distinguishes provisional ranking from verified ranking so you can see which scores rest on exact-source coverage and which still rely on source-unverified public rows.

Compare models instantly

Decision-ready picks

The fastest way to scan the current BenchLM dataset by outcome instead of just by benchmark.

The AI Race
Current Crown (model released this month)

Claude Mythos Preview · Anthropic · 99

Provider Podium

1st: Anthropic (95.7)
2nd: OpenAI (91.3)
3rd: Google (88)

6 months tracked · 101 total releases · 5 crown changes

Unified Model Leaderboard

Benchmarks, pricing, runtime signals, and context window in one table. Filter state syncs to the URL so every view is shareable. Provisional-ranked mode includes source-unverified non-generated benchmark evidence.

200 models

Score confidence legend: full sourced coverage · good sourced coverage · limited sourced coverage · estimated

[Leaderboard table: for each ranked model the table lists provider (Anthropic, OpenAI, Google, xAI, Z.AI, Moonshot AI, Alibaba, and others), open or closed license, lifecycle status (current, superseded, established), reasoning or standard mode, context window, input/output pricing, runtime signals, per-category scores, and the overall score with its confidence indicator. Named entries in the top 25 include GPT-5.4, GPT-5.2, GPT-5.1, Kimi 2.6, and GLM-5.]

Showing 25 of 200

AI models change fast. We track them for you.

For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.

Free. No spam. Unsubscribe anytime.

Scoring Methodology

Each model's overall score is a normalized weighted average of category averages. Within each category, benchmark scores are normalized to a common scale and combined using per-benchmark weights that favor harder, less-saturated evaluations.
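
As a rough illustration of that two-stage aggregation, here is a minimal sketch; the benchmark names, weights, and normalization bounds below are assumptions for illustration, not BenchLM's actual configuration.

```python
# Illustrative sketch of the category-then-overall aggregation described above.
# All weights and normalization bounds here are hypothetical.

def normalize(score: float, lo: float = 0.0, hi: float = 100.0) -> float:
    """Map a raw benchmark score onto a common 0-100 scale."""
    return 100.0 * (score - lo) / (hi - lo)

def category_average(scores: dict, weights: dict) -> float:
    """Weighted average of normalized benchmark scores within one category.
    Per-benchmark weights favor harder, less-saturated evaluations."""
    total = sum(weights[b] for b in scores)
    return sum(normalize(s) * weights[b] for b, s in scores.items()) / total

# Hypothetical coding-category inputs (already on a 0-100 scale).
coding_weights = {"SWE-bench Pro": 2.0, "LiveCodeBench": 1.5, "SciCode": 1.0}
coding_scores = {"SWE-bench Pro": 62.0, "LiveCodeBench": 81.0, "SciCode": 44.0}
print(round(category_average(coding_scores, coding_weights), 2))
```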

Each score includes a confidence indicator (1-4 dots) showing how much sourced benchmark data supports it — models with no non-generated benchmark coverage are marked as estimated.
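
One plausible reading of the dot indicator, assuming it is driven by the share of scoring benchmarks backed by sourced (non-generated) results; the thresholds below are assumptions, not the published rule.

```python
def confidence_dots(sourced: int, scored: int) -> int:
    """Map sourced-benchmark coverage to a 1-4 dot confidence indicator.
    Models with no non-generated coverage are marked as estimated (1 dot).
    Thresholds are illustrative only."""
    if scored == 0 or sourced == 0:
        return 1  # estimated
    coverage = sourced / scored
    if coverage >= 0.75:
        return 4  # full sourced coverage
    if coverage >= 0.50:
        return 3  # good sourced coverage
    return 2      # limited sourced coverage

print(confidence_dots(9, 12))  # -> 4
```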

Display-only benchmarks like MMLU, HumanEval, BBH, LisanBench, FLTEval, and the AIME/HMMT exams remain visible but are excluded from scoring.

Data sourced from OpenBench, official model papers, and public leaderboards. External consensus signals are used as bounded calibration inputs but are not exposed in exported data.

Agentic · 22%

Terminal-Bench 2.0 · OSWorld-Verified · BrowseComp · GAIA · Tau-Bench · WebArena

Coding · 20%

SWE-Rebench · SWE-bench Pro · LiveCodeBench · SWE-bench Verified · SciCode

Reasoning · 17%

LongBench v2 · ARC-AGI-2 · MRCRv2 · MuSR

Multimodal · 12%

MMMU-Pro · OfficeQA Pro

Knowledge · 12%

HLE · MMLU-Pro · FrontierScience · SimpleQA · GPQA · SuperGPQA

Multilingual · 7%

MMLU-ProX · MGSM

Instruction Following · 5%

IFEval · IFBench

Math · 5%

FrontierMath · AIME 2025 · BRUMO 2025 · MATH-500
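
Combining the category weights listed above, here is a minimal sketch of the final overall score; the renormalization over missing categories and the sample averages are assumptions for illustration.

```python
# Category weights as listed in the methodology (they sum to 100%).
CATEGORY_WEIGHTS = {
    "Agentic": 0.22, "Coding": 0.20, "Reasoning": 0.17, "Multimodal": 0.12,
    "Knowledge": 0.12, "Multilingual": 0.07, "Instruction Following": 0.05,
    "Math": 0.05,
}

def overall_score(category_averages: dict) -> float:
    """Weighted average of category averages, renormalized over the
    categories a model actually has scores for (an assumption)."""
    present = {c: v for c, v in category_averages.items() if v is not None}
    weight_sum = sum(CATEGORY_WEIGHTS[c] for c in present)
    return sum(CATEGORY_WEIGHTS[c] * v for c, v in present.items()) / weight_sum

# Hypothetical category averages for an example model.
print(round(overall_score({"Agentic": 84, "Coding": 88, "Reasoning": 80,
                           "Knowledge": 86, "Math": 92}), 2))
```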