This reporting page isolates the web research slice of agentic performance. It ranks models using only sourced benchmarks in the reporting family: browsing, evidence gathering, and multi-step web task completion, rather than generic overall agent scores.
According to BenchLM.ai, GPT-5.4 Pro leads this ranking with a score of 89.3, followed by Claude Mythos Preview (86.9) and Claude Opus 4.6 (83.7). The 2.4- and 3.2-point gaps between the top three suggest genuine performance differences rather than measurement noise.
The best open-weight option is GLM-5.1, ranked #5 with a score of 68. While proprietary models lead by a sizable margin, open-weight options remain viable for teams willing to trade roughly 20 points of benchmark performance for full model control.
This ranking is based on provisional overall weighted scores computed with BenchLM.ai's scoring formula. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
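As a rough illustration of how a coverage-aware weighted score like this can be computed, the sketch below averages per-benchmark scores under benchmark weights, renormalizing over whatever benchmarks a model actually has sourced results for. The benchmark names, weights, and scores are illustrative assumptions, not BenchLM.ai's actual formula or data.

```python
# Hypothetical sketch of an overall weighted score. All benchmark names,
# weights, and scores below are made up for illustration; BenchLM.ai's
# real formula is not published in this page.

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average over the benchmarks a model has sourced scores for.

    Renormalizes by the total weight of covered benchmarks, so models with
    partial coverage are not penalized for missing entries.
    """
    covered = [b for b in scores if b in weights]
    total_weight = sum(weights[b] for b in covered)
    if total_weight == 0:
        raise ValueError("no overlapping benchmarks between scores and weights")
    return sum(scores[b] * weights[b] for b in covered) / total_weight

# Example with invented numbers: three reporting-family benchmarks.
example = weighted_score(
    {"browsing": 90.0, "evidence_gathering": 88.0, "task_completion": 90.0},
    {"browsing": 0.4, "evidence_gathering": 0.3, "task_completion": 0.3},
)
print(round(example, 1))
```

Renormalizing over covered benchmarks is one common design choice for leaderboards with uneven benchmark coverage; an alternative is to exclude models below a coverage threshold entirely.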
GPT-5.4 Pro
OpenAI · 1.05M
Claude Mythos Preview
Anthropic · 1M
Claude Opus 4.6
Anthropic · 1M
The top model on this sourced reporting-family slice is GPT-5.4 Pro by OpenAI with an average of 89.3.
The best open-weight model is GLM-5.1 at position #5.
13 models are listed with sourced benchmark coverage in this reporting family.