GAIA evaluates AI models on real-world tasks that are conceptually simple for humans but require multi-step reasoning, web browsing, tool use, and multimodal understanding for AI. Tasks span three difficulty levels and test practical assistant capabilities rather than academic knowledge.
BenchLM is tracking GAIA in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.
These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.
BenchLM mirrors the published tracked score view for GAIA. Claude Mythos Preview leads the public snapshot at 52.3%, followed by GPT-5.4 Pro (50.5%) and GPT-5.4 (48.2%). BenchLM does not use these results to rank models overall.
Model                    Organization   Model ID                 Tracked score
Claude Mythos Preview    Anthropic      claude-mythos-preview    52.3%
GPT-5.4 Pro              OpenAI         gpt-5-4-pro              50.5%
GPT-5.4                  OpenAI         gpt-5-4                  48.2%
The published GAIA snapshot is tightly clustered at the top: Claude Mythos Preview sits at 52.3%, and the third-place model is only 4.1 points behind. The broader top-10 spread is 12.6 points, so the benchmark still separates strong models even when the leaders cluster.
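As an illustration of how those figures are derived, here is a minimal sketch in Python. The three leader scores come from the snapshot above; the remaining top-10 values are hypothetical placeholders, with the last entry chosen only so it reproduces the stated 12.6-point spread:

```python
# Minimal sketch: computing the leader gap and top-10 spread from tracked
# scores. Only the first three values are real snapshot scores; the rest
# are hypothetical placeholders, not BenchLM data.
top10_scores = [52.3, 50.5, 48.2, 46.0, 44.5, 43.1, 42.0, 41.2, 40.5, 39.7]

scores = sorted(top10_scores, reverse=True)
leader_to_third_gap = scores[0] - scores[2]   # 52.3 - 48.2 = 4.1 points
top10_spread = scores[0] - scores[9]          # leader minus 10th place

print(f"Leader-to-third gap: {leader_to_third_gap:.1f} points")
print(f"Top-10 spread: {top10_spread:.1f} points")
```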
26 models have been evaluated on GAIA. The benchmark falls in the Agentic category, which carries a 22% weight in BenchLM.ai's overall scoring system, and within that category GAIA is nominally assigned 12% of the category score. Because GAIA is currently display-only pending exact-source verification, however, these tracked scores do not feed into a model's overall ranking.
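For context, a minimal sketch of the nominal weight arithmetic, assuming the category weight and the within-category weight combine multiplicatively (that combination rule is an assumption; the BenchLM methodology page is authoritative):

```python
# Nominal weight arithmetic for GAIA. The multiplicative combination of
# category and within-category weights is an assumption, not a confirmed
# detail of BenchLM's scoring policy.
AGENTIC_CATEGORY_WEIGHT = 0.22   # Agentic category's share of the overall score
GAIA_WITHIN_CATEGORY = 0.12      # GAIA's share of the Agentic category score

effective_overall_weight = AGENTIC_CATEGORY_WEIGHT * GAIA_WITHIN_CATEGORY
print(f"Nominal overall weight once verified: {effective_overall_weight:.2%}")
# -> Nominal overall weight once verified: 2.64%
```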
Version: GAIA 2024
Refresh cadence: Annual
Staleness state: Refreshing
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
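To make that tiering decision concrete, here is a hypothetical sketch of such a rule. The tier names come from the paragraph above, but the function, its fields, and the thresholds are assumptions rather than BenchLM's actual policy:

```python
# Hypothetical sketch of a freshness-based tiering rule. The tier names
# ("strong differentiator", "benchmark to watch", "display-only reference")
# come from the page above; the rule logic and field names are assumptions.
def benchmark_tier(staleness_state: str, fully_verified: bool) -> str:
    if not fully_verified:
        # Rows awaiting exact-source attachments are never used for ranking.
        return "display-only reference"
    if staleness_state == "Fresh":
        return "strong differentiator"
    if staleness_state == "Refreshing":
        return "benchmark to watch"
    return "display-only reference"

# GAIA 2024 is currently "Refreshing" and not yet fully verified:
print(benchmark_tier("Refreshing", fully_verified=False))
# -> display-only reference
```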
Claude Mythos Preview currently leads the published GAIA snapshot with a tracked score of 52.3%. BenchLM shows this benchmark for display only and does not use it in overall rankings.
26 AI models are included in BenchLM's mirrored GAIA snapshot, based on the public leaderboard captured on April 7, 2026.