A scientific chart reasoning benchmark that tests whether models can understand, interpret, and reason about complex scientific visualizations including plots, diagrams, and data charts.
BenchLM mirrors the published score view for CharXiv. Claude Mythos Preview leads the public snapshot at 93.2%. BenchLM does not use these results to rank models overall.
Year
2024
Tasks
Scientific chart reasoning
Format
Chart understanding and reasoning
Difficulty
Scientific visualization reasoning
CharXiv evaluates a model's ability to reason about real-world scientific charts rather than simple visual QA. With-tools and without-tools variants isolate raw visual reasoning from tool-augmented performance.
Version
CharXiv 2024
Refresh cadence
Annual
Staleness state
Refreshing
Question availability
Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
Claude Mythos Preview by Anthropic currently leads with a score of 93.2% on CharXiv.
1 AI model has been evaluated on CharXiv on BenchLM.