Ministral 3 14B vs GPT-4o mini

Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, and math workflows.

Ministral 3 14B has the stronger overall profile here, with a composite score of 55 versus 52. That is a real lead, but close enough that category-level strengths matter more than the headline number.

Ministral 3 14B's sharpest advantage is in reasoning, where it averages 63.6 against 49.4. The single biggest benchmark swing on the page, though, runs the other way: SWE-bench Pro, where GPT-4o mini's 65 beats Ministral 3 14B's 34. GPT-4o mini hits back across coding generally, so the answer changes if that is the part of the workload you care about most.
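If you want to sanity-check these headline numbers, the sketch below recomputes category averages and the biggest single-benchmark swing from the scores published on this page. It assumes a plain unweighted mean over only the benchmarks where both models have scores; the page's published averages differ slightly from that, so its actual aggregation (weighting, inclusion rules, rounding) is presumably different.

```python
# Sketch: recomputing this page's headline stats from its per-benchmark scores.
# Only rows where both models have a score are included; the unweighted mean
# here is an assumption and will not exactly match the page's own averages.
scores = {
    "Agentic": [("Terminal-Bench 2.0", 48, 58),
                ("BrowseComp", 55, 49),
                ("OSWorld-Verified", 44, 44)],
    "Coding": [("HumanEval", 58, 87.2),
               ("SWE-bench Pro", 34, 65)],
    "Reasoning": [("LongBench v2", 60, 49),
                  ("MRCRv2", 60, 50)],
}

for category, rows in scores.items():
    ministral = sum(r[1] for r in rows) / len(rows)
    gpt4o_mini = sum(r[2] for r in rows) / len(rows)
    print(f"{category}: Ministral 3 14B {ministral:.1f} vs GPT-4o mini {gpt4o_mini:.1f}")

# Largest single-benchmark gap across all comparable rows
all_rows = [row for rows in scores.values() for row in rows]
name, m, g = max(all_rows, key=lambda r: abs(r[1] - r[2]))
print(f"Biggest swing: {name} ({m} vs {g})")  # SWE-bench Pro (34 vs 65)
```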

GPT-4o mini is also the more expensive model on tokens, at $0.15 input / $0.60 output per 1M tokens versus a listed $0.00 input / $0.00 output per 1M tokens for Ministral 3 14B. With a zero denominator there is no meaningful cost multiple; the practical comparison is GPT-4o mini's API bill against whatever it costs you to serve Ministral 3 14B yourself.
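To put the listed rates in concrete terms, here is a minimal cost sketch at GPT-4o mini's published prices. The monthly token volumes are illustrative assumptions, not figures from this page.

```python
# Sketch: monthly spend at GPT-4o mini's listed rates
# ($0.15 input / $0.60 output per 1M tokens).
INPUT_PER_M = 0.15
OUTPUT_PER_M = 0.60

input_tokens = 200_000_000   # hypothetical workload: 200M input tokens/month
output_tokens = 50_000_000   # hypothetical workload: 50M output tokens/month

cost = (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M
print(f"GPT-4o mini: ${cost:,.2f}/month")  # $60.00 at these volumes

# Ministral 3 14B is listed at $0.00/$0.00, so its per-token line item is zero;
# the real counterweight is your own hosting cost for the model.
```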

Quick Verdict

Pick Ministral 3 14B if you want the stronger overall benchmark profile; it leads in reasoning, multimodal, and multilingual work. GPT-4o mini becomes the better choice when coding or knowledge-heavy tasks are the priority, and it also holds a slim edge on agentic benchmarks.

Agentic

Category average: Ministral 3 14B 48.4, GPT-4o mini 50.9 (edge: GPT-4o mini)

Benchmark            Ministral 3 14B   GPT-4o mini
Terminal-Bench 2.0   48                58
BrowseComp           55                49
OSWorld-Verified     44                44

Coding

Category average: Ministral 3 14B 33, GPT-4o mini 65 (edge: GPT-4o mini)

Benchmark            Ministral 3 14B   GPT-4o mini
HumanEval            58                87.2
SWE-bench Verified   37                Coming soon
LiveCodeBench        31                Coming soon
SWE-bench Pro        34                65

Multimodal & Grounded

Category average: Ministral 3 14B 70.5, GPT-4o mini 60.2 (edge: Ministral 3 14B)

Benchmark      Ministral 3 14B   GPT-4o mini
MMMU-Pro       70                66
OfficeQA Pro   71                53

Reasoning

Category average: Ministral 3 14B 63.6, GPT-4o mini 49.4 (edge: Ministral 3 14B)

Benchmark      Ministral 3 14B   GPT-4o mini
SimpleQA       66                Coming soon
MuSR           64                Coming soon
BBH            74                Coming soon
LongBench v2   60                49
MRCRv2         60                50

Knowledge

Category average: Ministral 3 14B 50.1, GPT-4o mini 62 (edge: GPT-4o mini)

Benchmark         Ministral 3 14B   GPT-4o mini
MMLU              69                82
GPQA              68                Coming soon
SuperGPQA         66                Coming soon
OpenBookQA        64                Coming soon
MMLU-Pro          67                Coming soon
HLE               5                 Coming soon
FrontierScience   60                62

Instruction Following

Comparable scores for this category are coming soon. One or both models do not have sourced results here yet.

Benchmark   Ministral 3 14B   GPT-4o mini
IFEval      80                Coming soon

Multilingual

Category average: Ministral 3 14B 76.8, GPT-4o mini 74.7 (edge: Ministral 3 14B)

Benchmark   Ministral 3 14B   GPT-4o mini
MGSM        80                87
MMLU-ProX   75                68

Mathematics

Comparable scores for this category are coming soon. One or both models do not have sourced results here yet.

Benchmark       Ministral 3 14B   GPT-4o mini
AIME 2023       68                Coming soon
AIME 2024       70                Coming soon
AIME 2025       72                Coming soon
HMMT Feb 2023   64                Coming soon
HMMT Feb 2024   66                Coming soon
HMMT Feb 2025   65                Coming soon
BRUMO 2025      67                Coming soon
MATH-500        72                Coming soon

Frequently Asked Questions

Which is better, Ministral 3 14B or GPT-4o mini?

Ministral 3 14B is ahead overall, 55 to 52. The biggest single separator in this matchup is SWE-bench Pro, where Ministral 3 14B scores 34 to GPT-4o mini's 65.

Which is better for knowledge tasks, Ministral 3 14B or GPT-4o mini?

GPT-4o mini has the edge for knowledge tasks in this comparison, averaging 62 versus 50.1. Inside this category, MMLU is the benchmark that creates the most daylight between them.

Which is better for coding, Ministral 3 14B or GPT-4o mini?

GPT-4o mini has the edge for coding in this comparison, averaging 65 versus 33. Inside this category, SWE-bench Pro is the benchmark that creates the most daylight between them.

Which is better for reasoning, Ministral 3 14B or GPT-4o mini?

Ministral 3 14B has the edge for reasoning in this comparison, averaging 63.6 versus 49.4. Inside this category, LongBench v2 is the benchmark that creates the most daylight between them.

Which is better for agentic tasks, Ministral 3 14B or GPT-4o mini?

GPT-4o mini has the edge for agentic tasks in this comparison, averaging 50.9 versus 48.4. Inside this category, Terminal-Bench 2.0 is the benchmark that creates the most daylight between them.

Which is better for multimodal and grounded tasks, Ministral 3 14B or GPT-4o mini?

Ministral 3 14B has the edge for multimodal and grounded tasks in this comparison, averaging 70.5 versus 60.2. Inside this category, OfficeQA Pro is the benchmark that creates the most daylight between them.

Which is better for multilingual tasks, Ministral 3 14B or GPT-4o mini?

Ministral 3 14B has the edge for multilingual tasks in this comparison, averaging 76.8 versus 74.7. Inside this category the two benchmarks split: GPT-4o mini wins MGSM (87 vs 80), while Ministral 3 14B wins MMLU-ProX (75 vs 68).

Last updated: March 12, 2026
