194 models · 150 benchmarks

Compare frontier AI models by quality, cost, and context

109 provisional-ranked models, 13 verified-ranked models, and 194 tracked LLMs. The most comprehensive LLM comparison tool — 150 benchmarks, real pricing, and runtime data in one place.

The BenchLM LLM leaderboard for 2026 provisionally ranks 109+ models and tracks 194+ large language models side by side across 150 benchmarks, from SWE-bench and LiveCodeBench for coding to GPQA Diamond and MMLU-Pro for knowledge and reasoning. Whether you need the best AI models of 2026 for agentic workflows, math, multilingual tasks, or instruction following, our AI benchmark comparison tables make it easy to see how GPT-5, Claude, Gemini, DeepSeek, Llama, and dozens of other frontier and open-source models stack up on both benchmarks and operator tradeoffs such as price and context. The main leaderboard now distinguishes provisional rankings from verified rankings, so you can see which scores rest on exact-source coverage and which still rely on source-unverified public rows.

Compare models instantly


Decision-ready picks

The fastest way to scan the current BenchLM dataset by outcome instead of just by benchmark.

The AI Race
Current Crown (model released this month): Claude Mythos Preview (Anthropic), overall score 99

Provider Podium
1st: Anthropic (94.7)
2nd: OpenAI (91.7)
3rd: Google (88.0)

6 months tracked · 94 total releases · 5 crown changes

Unified Model Leaderboard

Benchmarks, pricing, runtime signals, and context windows in one table. Filter state syncs to the URL, so every view is shareable. Provisional-ranked mode includes source-unverified, non-generated benchmark evidence.
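To make the shareable-URL point concrete, here is a minimal Python sketch of round-tripping filter state through a query string. The filter names are hypothetical, not BenchLM's actual query parameters.

```python
from urllib.parse import parse_qs, urlencode

# Hypothetical filter fields; the page only says that filter state
# syncs to the URL so every view is shareable.
filters = {"provider": "anthropic", "type": "reasoning", "status": "current"}

query = urlencode(filters)  # 'provider=anthropic&type=reasoning&status=current'
restored = {k: v[0] for k, v in parse_qs(query).items()}
assert restored == filters  # anyone opening the shared URL sees the same view
```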

194 models
Score confidence: Full sourced coverage · Good sourced coverage · Limited sourced coverage · Estimated
[Leaderboard excerpt: the top 25 of 194 models, each row listing provider, open/closed access, lifecycle status (current, superseded, established), standard vs. reasoning type, context window, input/output pricing, throughput, latency, per-category scores, and score confidence. Named rows include Claude Mythos Preview (Anthropic); GPT-5.4, GPT-5.2, and GPT-5.1 (OpenAI); and GLM-5 (Z.AI), alongside entries from Google, xAI, Alibaba, and Moonshot AI.]

AI models change fast. We track them for you.

For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.

Free. No spam. Unsubscribe anytime.

Scoring Methodology

Each model's overall score is a normalized weighted average of category averages. Within each category, benchmark scores are normalized to a common scale and combined using per-benchmark weights that favor harder, less-saturated evaluations. A short sketch after the category list below illustrates the computation.

Each score includes a confidence indicator (1-4 dots) showing how much sourced benchmark data supports it — models with no non-generated benchmark coverage are marked as estimated.
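For illustration, a coverage-to-dots mapping could look like the sketch below. The `confidence_dots` helper and its thresholds are assumptions; the site documents only the four labels and the zero-coverage "estimated" case.

```python
def confidence_dots(sourced: int, scored: int) -> tuple[int, str]:
    """Map sourced-benchmark coverage to the 1-4 dot indicator.

    The 0.75 / 0.40 thresholds are hypothetical; the source specifies
    only the four labels and that zero non-generated coverage means
    the score is estimated.
    """
    if sourced == 0:
        return 1, "Estimated"
    coverage = sourced / scored
    if coverage >= 0.75:
        return 4, "Full sourced coverage"
    if coverage >= 0.40:
        return 3, "Good sourced coverage"
    return 2, "Limited sourced coverage"
```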

Display-only benchmarks like MMLU, HumanEval, BBH, LisanBench, FLTEval, and the AIME/HMMT exams remain visible but are excluded from scoring.

Data sourced from OpenBench, official model papers, and public leaderboards. External consensus signals are used as bounded calibration inputs but are not exposed in exported data.
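"Bounded calibration inputs" can be read as a clamped adjustment toward the external signal. The sketch below assumes a symmetric clamp; the `max_shift` width is invented for illustration.

```python
def calibrate(internal: float, consensus: float, max_shift: float = 2.0) -> float:
    """Nudge an internal score toward an external consensus signal,
    moving at most max_shift points in either direction.

    The clamp width is hypothetical; the source states only that
    consensus signals are bounded and not exported.
    """
    shift = min(max_shift, max(-max_shift, consensus - internal))
    return internal + shift
```

Keeping the adjustment bounded means a noisy external signal can fine-tune a score but never override the sourced benchmark evidence.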

Agentic (22%)

Terminal-Bench 2.0 · OSWorld-Verified · BrowseComp · GAIA · Tau-Bench · WebArena

Coding (20%)

SWE-Rebench · SWE-bench Pro · LiveCodeBench · SWE-bench Verified · SciCode

Reasoning (17%)

LongBench v2 · ARC-AGI-2 · MRCRv2 · MuSR

Multimodal (12%)

MMMU-Pro · OfficeQA Pro

Knowledge (12%)

HLE · MMLU-Pro · FrontierScience · SimpleQA · GPQA · SuperGPQA

Multilingual (7%)

MMLU-ProX · MGSM

Instruction Following (5%)

IFEval · IFBench

Math (5%)

FrontierMath · AIME 2025 · BRUMO 2025 · MATH-500
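Putting the pieces together, here is a minimal sketch of the scoring pipeline, assuming a 0-100 normalization and renormalization over the categories a model actually covers. Benchmark ranges and per-benchmark weights are illustrative; only the category weights come from the list above.

```python
CATEGORY_WEIGHTS = {
    "Agentic": 0.22, "Coding": 0.20, "Reasoning": 0.17,
    "Multimodal": 0.12, "Knowledge": 0.12, "Multilingual": 0.07,
    "Instruction Following": 0.05, "Math": 0.05,
}

def normalize(raw: float, lo: float, hi: float) -> float:
    """Map a raw benchmark score onto a common 0-100 scale."""
    return 100.0 * (raw - lo) / (hi - lo)

def category_average(benchmarks: list[tuple[float, float, float, float]]) -> float:
    """Weighted mean of normalized scores within one category.

    Each tuple is (raw, lo, hi, weight); heavier weights go to harder,
    less-saturated evaluations.
    """
    total = sum(w for *_, w in benchmarks)
    return sum(normalize(raw, lo, hi) * w for raw, lo, hi, w in benchmarks) / total

def overall_score(category_avgs: dict[str, float]) -> float:
    """Weighted average of category averages, renormalized over the
    categories the model has coverage for."""
    weight = sum(CATEGORY_WEIGHTS[c] for c in category_avgs)
    return sum(CATEGORY_WEIGHTS[c] * v for c, v in category_avgs.items()) / weight

# Example: a model scored in three of the eight categories.
print(round(overall_score({"Coding": 91.0, "Reasoning": 88.0, "Math": 95.0}), 2))  # 90.26
```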