Ministral 3 14B Benchmark Scores & Performance

Benchmark analysis of Ministral 3 14B by Mistral across 32 sourced tests on BenchLM.

According to BenchLM.ai, Ministral 3 14B ranks #70 out of 123 models with an overall score of 55/100. While not a frontier model, it offers specific advantages depending on the use case.

Ministral 3 14B is an open-weight model with a 128K-token context window. It processes queries without explicit chain-of-thought reasoning, which gives it faster response times and lower token usage.
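In practice, a non-reasoning model like this is queried as a single-turn chat completion with no separate reasoning phase. The sketch below assumes an OpenAI-compatible endpoint (for example a locally hosted server) and a placeholder model id; neither value comes from BenchLM or Mistral.

```python
# Minimal sketch: querying a non-reasoning model through an OpenAI-compatible
# endpoint. The base_url and model id are placeholders, not official values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumption: a locally hosted server
    api_key="not-needed-locally",         # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="ministral-3-14b",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize the attached chart description."}],
    max_tokens=512,
)

# With no explicit chain-of-thought phase, the reply arrives as a single
# completion and no extra reasoning tokens are generated or billed.
print(response.choices[0].message.content)
```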

Ministral 3 14B is the base entry in the Ministral 3 14B family, which also includes Ministral 3 14B (Reasoning).

Its strongest category is Multimodal & Grounded (#52), while its weakest is Knowledge (#75). This profile makes it best suited to screenshots, documents, charts, and other grounded multimodal workflows.

Creator: Mistral
Source Type: Open Weight
Reasoning: Non-Reasoning
Context Window: 128K
Overall Score: 55 (#70 of 123)
Arena Elo: 1233

Family & Lineage

Family: Ministral 3 14B (base entry)

Knowledge Benchmarks

MMLU: 69
GPQA: 68
SuperGPQA: 66
OpenBookQA: 64
MMLU-Pro: 67
HLE: 5
FrontierScience: 60

Coding Benchmarks

HumanEval: 58
SWE-bench Verified: 37
LiveCodeBench: 31
SWE-bench Pro: 34

Mathematics Benchmarks

AIME 2023: 68
AIME 2024: 70
AIME 2025: 72
HMMT Feb 2023: 64
HMMT Feb 2024: 66
HMMT Feb 2025: 65
BRUMO 2025: 67
MATH-500: 72

Reasoning Benchmarks

SimpleQA: 66
MuSR: 64
BBH: 74
LongBench v2: 60
MRCRv2: 60

Agentic Benchmarks

Terminal-Bench 2.0: 48
BrowseComp: 55
OSWorld-Verified: 44

Multimodal & Grounded Benchmarks

MMMU-Pro: 70
OfficeQA Pro: 71

Instruction Following Benchmarks

IFEval: 80

Multilingual Benchmarks

MGSM: 80
MMLU-ProX: 75
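For readers who want to aggregate these per-benchmark numbers themselves, the sketch below computes a plain unweighted mean per category. The benchmark names and scores are taken from the tables above; everything else is illustrative. Note that BenchLM's published category averages quoted in the FAQ below do not all match an unweighted mean, so the site presumably applies its own weighting.

```python
# Minimal sketch: unweighted category means over the scores listed above.
# BenchLM's own category averages may be computed with different weighting.
knowledge = {
    "MMLU": 69, "GPQA": 68, "SuperGPQA": 66, "OpenBookQA": 64,
    "MMLU-Pro": 67, "HLE": 5, "FrontierScience": 60,
}
coding = {
    "HumanEval": 58, "SWE-bench Verified": 37,
    "LiveCodeBench": 31, "SWE-bench Pro": 34,
}

def unweighted_mean(scores: dict[str, int]) -> float:
    """Plain arithmetic mean of the benchmark scores in one category."""
    return sum(scores.values()) / len(scores)

for name, scores in [("Knowledge", knowledge), ("Coding", coding)]:
    print(f"{name}: {unweighted_mean(scores):.1f}")
```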

Frequently Asked Questions

How does Ministral 3 14B perform overall in AI benchmarks?

Ministral 3 14B ranks #70 out of 123 models with an overall score of 55. It is created by Mistral and features a 128K context window.

Is Ministral 3 14B good for knowledge and understanding?

Ministral 3 14B ranks #75 out of 123 models in knowledge and understanding benchmarks with an average score of 50.1. There are stronger options in this category.

Is Ministral 3 14B good for coding and programming?

Ministral 3 14B ranks #72 out of 123 models in coding and programming benchmarks with an average score of 33. There are stronger options in this category.

Is Ministral 3 14B good for mathematics?

Ministral 3 14B ranks #69 out of 123 models in mathematics benchmarks with an average score of 69.7. There are stronger options in this category.

Is Ministral 3 14B good for reasoning and logic?

Ministral 3 14B ranks #75 out of 123 models in reasoning and logic benchmarks with an average score of 63.6. There are stronger options in this category.

Is Ministral 3 14B good for agentic tool use and computer tasks?

Ministral 3 14B ranks #74 out of 123 models in agentic tool use and computer tasks benchmarks with an average score of 48.4. There are stronger options in this category.

Is Ministral 3 14B good for multimodal and grounded tasks?

Ministral 3 14B ranks #52 out of 123 models in multimodal and grounded tasks benchmarks with an average score of 70.5. This is its strongest category, although stronger options still exist overall.

Is Ministral 3 14B good for instruction following?

Ministral 3 14B ranks #71 out of 123 models in instruction following benchmarks with an average score of 80. There are stronger options in this category.

Is Ministral 3 14B good for multilingual tasks?

Ministral 3 14B ranks #65 out of 123 models in multilingual tasks benchmarks with an average score of 76.8. There are stronger options in this category.

Is Ministral 3 14B open source?

Yes, Ministral 3 14B is an open weight model created by Mistral, meaning it can be downloaded and run locally or fine-tuned for specific use cases.
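Because the weights are open, local inference is possible with standard tooling. The sketch below uses Hugging Face transformers; the repository id is a placeholder, so check Mistral's official Hugging Face organization for the actual checkpoint and license terms.

```python
# Minimal sketch: local inference on an open-weight checkpoint with
# Hugging Face transformers. The repo id below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mistralai/Ministral-3-14B"  # assumption: not a verified repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # half precision so a 14B model fits on one large GPU
    device_map="auto",           # let accelerate place layers across available devices
)

messages = [{"role": "user", "content": "Explain what a context window is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```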

Which sibling models are related to Ministral 3 14B?

Ministral 3 14B belongs to the Ministral 3 14B family. Related variants on BenchLM include Ministral 3 14B (Reasoning).

What is the context window size of Ministral 3 14B?

Ministral 3 14B has a context window of 128K, which determines how much text it can process in a single interaction.
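As a rough guide, a 128K-token window covers on the order of a few hundred pages of English text, but the safest check is to tokenize the input with the model's own tokenizer. The sketch below does that; the repo id is again a placeholder, and the exact window may be 131,072 tokens rather than a round 128,000.

```python
# Minimal sketch: checking whether a document fits into a ~128K-token context
# window before sending it to the model. The repo id is a placeholder.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 128_000  # assumption: the exact limit may be 131,072 (128 * 1024)

tokenizer = AutoTokenizer.from_pretrained("mistralai/Ministral-3-14B")  # placeholder

with open("long_report.txt", encoding="utf-8") as f:
    document = f.read()

n_tokens = len(tokenizer.encode(document))
# Leave headroom for the system prompt and the model's reply.
fits = n_tokens + 2_000 < CONTEXT_WINDOW
print(f"{n_tokens} tokens -> {'fits' if fits else 'needs chunking or truncation'}")
```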

Last updated: March 12, 2026
