Multimodal workloads — processing images, charts, documents, and screenshots — often involve large inputs that drive up token costs quickly. This ranking divides each model's weighted multimodal score (MMMU-Pro, OfficeQA-Pro) by output token price. For document processing pipelines and visual AI applications running at scale, the value leaders here offer the best multimodal reasoning per dollar spent.
Unless noted otherwise, ranking surfaces on this page use BenchLM's provisional leaderboard lane rather than the stricter sourced-only verified leaderboard.
Bottom line: Multimodal inputs are large and expensive. Gemini 3.1 Flash-Lite dominates value with strong multimodal scores at the lowest price point.
According to BenchLM.ai, Gemini 3.1 Flash-Lite leads this ranking with a score of 156.75, followed by GPT-4.1 nano (96.55) and Gemini 2.5 Flash (88.27). There is a significant gap between the leading models and the rest of the field.
The best open-weight option is DeepSeek Coder 2.0 (ranked #6 with a score of 34.04). While proprietary models lead, open-weight options are within striking distance for teams willing to trade a few points of performance for full model control.
This ranking is based on provisional weighted averages across the scoring benchmarks in BenchLM.ai's multimodalGrounded category. For detailed model profiles, click any model name below. To compare two specific models head-to-head, use the "vs #" links.
Gemini 3.1 Flash-Lite
Google · 1M
Score: 62.7 · $0.4/1M
Best multimodal value. Strong visual reasoning at the lowest cost.
GPT-4.1 nano
OpenAI · 1M
Score: 38.6 · $0.4/1M
Good multimodal value in OpenAI ecosystem.
Gemini 2.5 Flash
Google · 1M
Score: 53 · $0.6/1M
Solid multimodal value with good all-around performance.
Gemini 3.1 Flash-Lite leads multimodal value — best visual reasoning per dollar.
GPT-4.1 nano offers strong multimodal value in OpenAI's lineup.
Gemini 2.5 Flash delivers good multimodal value with broader capabilities.
The best value model is Gemini 3.1 Flash-Lite by Google with a provisional Score/$ ratio of 156.75 (score: 62.7, output: $0.4/1M tokens).
The best open-weight model is DeepSeek Coder 2.0 at position #6.
33 models are included in this ranking.
Value scores divide the weighted multimodal score by output token price (per 1M tokens). Higher means more capability per dollar. Models with no listed price are excluded.
Value rankings favor cheap models even if absolute performance is modest. A model scoring half as well at one-tenth the price wins on value — but may not meet your quality bar. Always check raw scores alongside value rankings.
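As a minimal sketch of the metric described above (the function and variable names are illustrative, not BenchLM's actual code), the value score is simply the weighted benchmark score divided by the output token price per 1M tokens:

```python
def value_score(benchmark_score: float, output_price_per_1m: float) -> float:
    """Capability per dollar: weighted multimodal score / output price ($ per 1M tokens)."""
    if output_price_per_1m <= 0:
        # Models with no listed price are excluded from the ranking.
        raise ValueError("model has no listed output price")
    return benchmark_score / output_price_per_1m

# Reproducing the top entry from the table above:
# Gemini 3.1 Flash-Lite, score 62.7 at $0.4 per 1M output tokens.
print(round(value_score(62.7, 0.4), 2))  # 156.75
```

Note that the ratio is computed from more precise underlying scores than the rounded figures shown in the cards, so recomputing from the displayed numbers may differ slightly from the published ratios.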