This reporting page focuses on computer-use and GUI-agent behavior: whether a model can read screens, ground actions, and complete software tasks. It is distinct from pure tool calling and distinct from plain multimodal image understanding.
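To make that distinction concrete: a tool-calling model names a function and fills structured arguments, while a computer-use model must read a screenshot and ground its action in pixel coordinates. A minimal Python sketch, with hypothetical names and coordinates rather than any real model's output format:

    from dataclasses import dataclass

    @dataclass
    class ClickAction:
        """A grounded GUI action: an instruction resolved to a point
        on the current screenshot."""
        x: int  # pixel column
        y: int  # pixel row

    # Plain tool calling: a named function with structured arguments.
    tool_call = {"name": "open_file", "arguments": {"path": "report.txt"}}

    # Computer use: the model locates the "File" menu visually and clicks it.
    gui_action = ClickAction(x=412, y=87)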
This page ranks models using only sourced computer-use and GUI benchmarks in the reporting family.
According to BenchLM.ai, Gemini 3.1 Pro leads this ranking with a score of 84.4, followed closely by Muse Spark (84.1) and, at a greater distance, Claude Mythos Preview (79.6). The top two are effectively tied at 0.3 points apart; the real separation is the 4.5-point drop to third place.
The best open-weight option is Holo3-35B-A3B, ranked #6 with a score of 77.8. Proprietary models lead, but at roughly 6.6 points behind the top score, open-weight options are within striking distance for teams willing to trade a few points of performance for full model control.
This ranking is based on provisional overall weighted scores computed with BenchLM.ai's scoring formula. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
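BenchLM.ai does not spell out its scoring formula on this page. As a rough illustration only, an overall weighted score of this kind is typically a weight-normalized average over the benchmarks a model has sourced results for; the sketch below uses hypothetical benchmark names and weights, not BenchLM.ai's actual formula.

    def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
        """Weight-normalized average over the benchmarks a model covers."""
        covered = [b for b in weights if b in scores]
        total_weight = sum(weights[b] for b in covered)
        if total_weight == 0:
            raise ValueError("no sourced benchmark coverage")
        return sum(scores[b] * weights[b] for b in covered) / total_weight

    # Hypothetical computer-use benchmarks and weights (assumptions, not sourced):
    weights = {"osworld": 0.4, "screenspot": 0.3, "webarena": 0.3}
    scores = {"osworld": 86.0, "screenspot": 84.0, "webarena": 82.0}
    print(round(weighted_score(scores, weights), 1))  # 84.2

Normalizing by the covered weight keeps models with partial benchmark coverage comparable, which matters for a slice that only counts sourced results.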
1. Gemini 3.1 Pro · Google · 1M context · 84.4
2. Muse Spark · Meta · 262K context · 84.1
3. Claude Mythos Preview · Anthropic · 1M context · 79.6
The top model in this sourced reporting-family slice is Gemini 3.1 Pro by Google, with an average score of 84.4.
The best open-weight model is Holo3-35B-A3B at position #6.
24 models are listed with sourced benchmark coverage in this reporting family.