A research-focused FrontierScience evaluation variant for scientific investigation and problem solving.
BenchLM mirrors the published score view for FrontierScience Research. GPT-5.4 Pro leads the public snapshot at 36.7%. BenchLM does not use these results to rank models overall.
Year
2026
Tasks
Scientific research problems
Format
Research evaluation
Difficulty
Frontier scientific research
Meta uses FrontierScience Research in its Contemplating-mode comparison table as a distinct scientific research variant. BenchLM stores it as a display-only frontier science reference.
Version
FrontierScience Research 2026
Refresh cadence
Quarterly
Staleness state
Current
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
GPT-5.4 Pro by OpenAI currently leads with a score of 36.7% on FrontierScience Research.
1 AI model has been evaluated on FrontierScience Research on BenchLM.