A grounded multimodal factuality benchmark for evidence-linked answer correctness.
BenchLM tracks Facts-VLM in its local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the currently tracked rows below as a display-only reference table.
These tracked rows are useful for inspection and spot-checking, but until exact-source attachment is complete they should not be treated as fully verified public benchmark rows.
BenchLM mirrors the published tracked-score view for Facts-VLM. GLM-5V-Turbo leads the public snapshot at 58.6%, followed by Kimi K2.5 at 57.8%. BenchLM does not use these results to rank models overall.
Year: 2026
Tasks: Grounded factuality tasks
Format: Evidence-linked multimodal factuality
Difficulty: Grounded multimodal factuality
BenchLM stores Facts-VLM as a display-only benchmark reference until exact provider tables are available.
Version: Facts-VLM 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
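The tiering rule above can be sketched as a small classifier. This is an illustrative sketch only: the field names (`staleness_state`, `has_exact_sources`) and the decision order are assumptions, not BenchLM's actual scoring policy.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkFreshness:
    """Illustrative freshness metadata; field names are assumptions."""
    staleness_state: str     # e.g. "current", "aging", "stale"
    has_exact_sources: bool  # exact-source verification attached?

def classify(meta: BenchmarkFreshness) -> str:
    """Map freshness metadata to a display tier (hypothetical rule)."""
    if not meta.has_exact_sources:
        # Rows without exact-source verification stay display-only,
        # regardless of how fresh the snapshot is.
        return "display-only reference"
    if meta.staleness_state == "current":
        return "strong differentiator"
    if meta.staleness_state == "aging":
        return "benchmark to watch"
    return "display-only reference"

# Facts-VLM as described here: current snapshot, but exact-source
# attachments are still pending, so it remains display-only.
print(classify(BenchmarkFreshness("current", False)))
```

Under these assumptions, a benchmark only graduates from display-only status once its exact-source verification is complete, and it is then tiered by staleness alone.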
GLM-5V-Turbo currently leads the published Facts-VLM snapshot with a tracked score of 58.6%. BenchLM shows this benchmark for display only and does not use it in overall rankings.
Two AI models are included in BenchLM's mirrored Facts-VLM snapshot, based on the public leaderboard captured on April 16, 2026.