Head-to-head comparison across 0 benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
Laguna M.1: 46
Ministral 3 14B (Reasoning): --
Benchmark data for Laguna M.1 and Ministral 3 14B (Reasoning) is coming soon on BenchLM.
Metric                                  Laguna M.1       Ministral 3 14B (Reasoning)
Price (input / output per 1M tokens)    $0.00 / $0.00    $0.20 / $0.20
Context window                          131K             128K

All other listed metrics are currently N/A for both models.
BenchLM does not have sourced benchmark coverage for Ministral 3 14B (Reasoning) yet. This comparison is currently limited to metadata such as context window, reasoning mode, and pricing where available.
Ministral 3 14B (Reasoning) is priced at $0.20 input / $0.20 output per 1M tokens, versus $0.00 input / $0.00 output per 1M tokens for Laguna M.1. Laguna M.1 has the larger context window at 131K, compared with 128K for Ministral 3 14B (Reasoning).
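To make the per-1M-token rates concrete, here is a minimal cost sketch in Python. The rates and model names mirror the table above; the 50K-token prompt and 2K-token completion are illustrative assumptions, not measured workloads.

```python
# Per-1M-token rates (input USD, output USD) as listed on this page.
PRICES_PER_1M = {
    "Laguna M.1": (0.00, 0.00),
    "Ministral 3 14B (Reasoning)": (0.20, 0.20),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-1M-token rates."""
    in_rate, out_rate = PRICES_PER_1M[model]
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Example request: a 50K-token prompt with a 2K-token completion
# (hypothetical sizes chosen only to illustrate the arithmetic).
for model in PRICES_PER_1M:
    print(f"{model}: ${request_cost(model, 50_000, 2_000):.4f}")
# Laguna M.1: $0.0000
# Ministral 3 14B (Reasoning): $0.0104
```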
Not yet in full. BenchLM is tracking both models, but the sourced benchmark breakdown for this comparison is still coming soon.
BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.
Laguna M.1: $0.00 input / $0.00 output per 1M tokens
Ministral 3 14B (Reasoning): $0.20 input / $0.20 output per 1M tokens
Both model pages still include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.