WebArena Web Agent Benchmark (WebArena)

WebArena is a realistic web environment for evaluating autonomous AI agents on complex, multi-step browser tasks. Agents must navigate e-commerce sites, forums, content management systems, and code repositories to complete practical objectives like purchasing items, finding information, and managing accounts.

How BenchLM shows WebArena right now

BenchLM is tracking WebArena in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.

15 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only

Tracked score on WebArena — April 7, 2026

BenchLM mirrors the published tracked score view for WebArena. Claude Mythos Preview leads the public snapshot at 68.7%, followed by GPT-5.4 Pro (65.8%) and Claude Opus 4.6 (64.5%). BenchLM does not use these results to rank models overall.

15 models · Agentic · 8% of category score · Refreshing · Updated April 7, 2026

The published WebArena snapshot is tightly clustered at the top: Claude Mythos Preview sits at 68.7%, while the third row is only 4.2 points behind. The broader top-10 spread is 16.6 points, so the benchmark still separates strong models even when the leaders cluster.
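
Those gaps can be recomputed directly from the tracked rows further down the page. The following is a minimal sketch in Python; the scores are copied from the table below rather than pulled from any BenchLM API.

```python
# Tracked WebArena scores from the table below (rank order, percent).
scores = [68.7, 65.8, 64.5, 62.3, 59.2, 58.4, 57.2, 55.8,
          53.7, 52.1, 51.3, 49.8, 48.6, 46.2, 44.5]

leader_vs_third = scores[0] - scores[2]  # gap between #1 and #3
top10_spread = scores[0] - scores[9]     # spread across the top 10 rows

print(f"#1 vs #3 gap:  {leader_vs_third:.1f} points")  # 4.2
print(f"Top-10 spread: {top10_spread:.1f} points")     # 16.6
```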

15 models have been evaluated on WebArena. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system, and within that category WebArena is assigned 8% of the category score. While the benchmark remains in its display-only state, however, these tracked rows do not feed into a model's overall ranking.
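
If those weights compose multiplicatively (an assumption; the exact aggregation formula is documented on the methodology page, not here), a benchmark-level score would feed into the overall score roughly as in this sketch:

```python
# Hypothetical aggregation: assumes category and benchmark weights
# simply multiply; BenchLM's actual formula may differ.
AGENTIC_CATEGORY_WEIGHT = 0.22    # Agentic category share of the overall score
WEBARENA_BENCHMARK_WEIGHT = 0.08  # WebArena share of the Agentic category

def overall_contribution(webarena_score: float) -> float:
    """Points a WebArena score would add to a 0-100 overall score."""
    return webarena_score * WEBARENA_BENCHMARK_WEIGHT * AGENTIC_CATEGORY_WEIGHT

# Claude Mythos Preview's tracked 68.7% would contribute ~1.21 points under
# this assumed scheme -- and nothing while the benchmark is display-only.
print(round(overall_contribution(68.7), 2))
```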

About WebArena

Year: 2024

Tasks: 812

BenchLM freshness & provenance

Version: WebArena 2024

Refresh cadence: Annual

Staleness state: Refreshing

Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
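
As a rough illustration only, that gate can be pictured as a rule combining the staleness state with source-verification status. The state and treatment names below echo the labels used on this page, but the mapping itself is an assumption, not the published methodology.

```python
# Illustrative only: an assumed gate combining staleness state with
# source-verification status. Not BenchLM's published policy.
def scoring_treatment(staleness_state: str, exact_source_attached: bool) -> str:
    if not exact_source_attached:
        # Rows without exact-source attachments stay display-only,
        # as with the WebArena rows on this page.
        return "display-only reference"
    if staleness_state == "fresh":
        return "strong differentiator"
    if staleness_state == "refreshing":
        return "benchmark to watch"
    return "display-only reference"

# WebArena today: state "Refreshing", exact-source attachments pending.
print(scoring_treatment("refreshing", exact_source_attached=False))
# -> "display-only reference"
```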

Tracked score table (15 models)

#1 · Claude Mythos Preview (claude-mythos-preview) · 68.7%
#2 · GPT-5.4 Pro (gpt-5-4-pro) · 65.8%
#3 · Claude Opus 4.6 (claude-opus-4-6) · 64.5%
#4 · GPT-5.4 (gpt-5-4) · 62.3%
#5 · Claude Sonnet 4.6 (claude-sonnet-4-6) · 59.2%
#6 · Gemini 3.1 Pro (gemini-3-1-pro) · 58.4%
#7 · Qwen3.6 Plus (qwen3-6-plus) · 57.2%
#8 · Qwen3.5 397B (qwen3-5-397b) · 55.8%
#9 · Grok 4.1 (grok-4-1) · 53.7%
#10 · Gemini 3 Pro (gemini-3-pro) · 52.1%
#11 · Kimi K2.5 (kimi-k2-5) · 51.3%
#12 · GLM-5 (Reasoning) (glm-5-reasoning) · 49.8%
#13 · DeepSeek V3.2 (Thinking) (deepseek-v3-2-thinking) · 48.6%
#14 · Llama 4 Behemoth (llama-4-behemoth) · 46.2%
#15 · o4-mini (high) (o4-mini-high) · 44.5%

FAQ

What does WebArena measure?

WebArena is a realistic web environment for evaluating autonomous AI agents on complex, multi-step browser tasks. Agents must navigate e-commerce sites, forums, content management systems, and code repositories to complete practical objectives like purchasing items, finding information, and managing accounts.

Which model leads the published WebArena snapshot?

Claude Mythos Preview currently leads the published WebArena snapshot with a tracked score of 68.7%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on WebArena?

15 AI models are included in BenchLM's mirrored WebArena snapshot, based on the public leaderboard captured on April 7, 2026.

Last updated: April 7, 2026 · mirrored from the public benchmark leaderboard
