A benchmark for web-browsing agents that must search, inspect sources, gather evidence, and return the correct answer to research-oriented questions.
As of April 29, 2026, GPT-5.5 Pro leads the BrowseComp leaderboard with 90.1%, followed by GPT-5.4 Pro (89.3%) and Claude Mythos Preview (86.9%).
1. GPT-5.5 Pro (OpenAI): 90.1%
2. GPT-5.4 Pro (OpenAI): 89.3%
3. Claude Mythos Preview (Anthropic): 86.9%
According to BenchLM.ai, GPT-5.5 Pro leads the BrowseComp benchmark with a score of 90.1%, followed by GPT-5.4 Pro (89.3%) and Claude Mythos Preview (86.9%). The spread is moderate: the top two models sit within 0.8 percentage points of each other, while the gap to third place is 2.4 points, marking a distinct top tier.
21 models have been evaluated on BrowseComp. The benchmark falls in the Agentic category. This category carries a 22% weight in BenchLM.ai's overall scoring system. Within that category, BrowseComp contributes 18% of the category score, so strong performance here directly affects a model's overall ranking.
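To make the weighting concrete, here is a minimal sketch assuming BenchLM composes the two weights multiplicatively; the page states the two percentages but not the exact aggregation formula, so treat this as an illustration:

```python
# Sketch of how BrowseComp could feed into a model's overall BenchLM score.
# Assumption: the within-benchmark weight multiplies the category weight;
# BenchLM's actual aggregation formula is not published on this page.

AGENTIC_CATEGORY_WEIGHT = 0.22   # Agentic category's share of the overall score
BROWSECOMP_IN_CATEGORY = 0.18    # BrowseComp's share within the Agentic category

effective_weight = AGENTIC_CATEGORY_WEIGHT * BROWSECOMP_IN_CATEGORY
print(f"BrowseComp's effective share of the overall score: {effective_weight:.1%}")
# -> BrowseComp's effective share of the overall score: 4.0%

def browsecomp_contribution(score_pct: float) -> float:
    """Points a model's BrowseComp score contributes to its overall score."""
    return score_pct * effective_weight

print(round(browsecomp_contribution(90.1), 2))  # GPT-5.5 Pro: ~3.57 overall points
```

Under this reading, BrowseComp alone moves roughly four points of a model's overall BenchLM score, which is why leaderboard shifts here tend to show up in the overall ranking.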
Year: 2025
Tasks: Research questions requiring browsing
Format: Web search and evidence synthesis
Difficulty: Hard web research
BrowseComp is designed to measure real web research behavior, not just latent world knowledge. It rewards models that can plan searches, inspect multiple pages, and avoid shallow answer synthesis.
Version: BrowseComp 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
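As an illustration of that policy, the sketch below maps freshness metadata to the three treatment tiers named above. The tier names come from this page; the field names, the possible staleness states beyond "Current", and the mapping rule itself are assumptions, not BenchLM's published logic:

```python
# Hypothetical sketch of a freshness-based tiering rule. The three tier
# names appear on this page; the staleness values "Aging"/"Stale" and the
# decision rule below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class BenchmarkFreshness:
    version: str
    refresh_cadence: str   # e.g. "Quarterly"
    staleness_state: str   # "Current" per this page; "Aging"/"Stale" assumed

def scoring_tier(meta: BenchmarkFreshness) -> str:
    """Map freshness metadata to one of the three treatment tiers."""
    if meta.staleness_state == "Current":
        return "strong differentiator"
    if meta.staleness_state == "Aging":
        return "benchmark to watch"
    return "display-only reference"

browsecomp = BenchmarkFreshness("BrowseComp 2026", "Quarterly", "Current")
print(scoring_tier(browsecomp))  # -> strong differentiator
```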