Moonshot v1 vs Ministral 3 8B (Reasoning)

Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, and math workflows.

Moonshot v1 is clearly ahead on the aggregate, 47 to 36. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.

Moonshot v1's sharpest advantage is in multimodal & grounded, where it averages 52.6 against 33.4. The biggest single-benchmark swing on the page is 23 points, seen on MMLU (53 to 30) and matched by GPQA, SuperGPQA, and OpenBookQA.

Ministral 3 8B (Reasoning) is the reasoning-tuned model of the pair; Moonshot v1 is not. Reasoning usually helps on harder chain-of-thought-heavy tests, but it can also mean higher latency and more token spend in real use.
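To make that token spend concrete, here is a minimal back-of-the-envelope sketch. Every price and token count below is a hypothetical placeholder, not a published figure for either model; the only point is that a reasoning model that emits a long hidden chain of thought multiplies output-token cost.

```python
# Hypothetical cost comparison: a reasoning model emits extra
# "thinking" tokens before its visible answer, so the same request
# costs more. All prices and token counts are made up for
# illustration; substitute real figures from your provider.

def request_cost(prompt_tokens: int, output_tokens: int,
                 price_in_per_mtok: float, price_out_per_mtok: float) -> float:
    """Cost in dollars for one request, given $/1M-token prices."""
    return (prompt_tokens * price_in_per_mtok
            + output_tokens * price_out_per_mtok) / 1_000_000

PROMPT = 2_000    # tokens in the prompt (hypothetical)
ANSWER = 500      # visible answer tokens (hypothetical)
THINKING = 4_000  # hidden chain-of-thought tokens (hypothetical)

plain = request_cost(PROMPT, ANSWER, 1.0, 3.0)
reasoning = request_cost(PROMPT, ANSWER + THINKING, 1.0, 3.0)

print(f"plain model:     ${plain:.4f} per request")
print(f"reasoning model: ${reasoning:.4f} per request "
      f"({reasoning / plain:.1f}x)")
```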

Quick Verdict

Pick Moonshot v1 if you want the stronger benchmark profile; it leads every category here, including reasoning. Ministral 3 8B (Reasoning) is the better choice only if you specifically want a small reasoning-first model and can accept the lower scores.

Agentic

| Benchmark | Moonshot v1 | Ministral 3 8B (Reasoning) |
| --- | --- | --- |
| Category average | 42.2 | 38.5 |
| Terminal-Bench 2.0 | 39 | 39 |
| BrowseComp | 49 | 41 |
| OSWorld-Verified | 41 | 36 |
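As a sanity check, here is a minimal sketch of recomputing a category average as an unweighted mean of the rows above. Treat the methodology as an assumption: the page's published averages (42.2 and 38.5 for this category) do not exactly match the unweighted means of the three listed rows, so the site may weight benchmarks or include rows it does not display.

```python
# Recompute the Agentic category averages from the table rows above.
# Assumption: an unweighted mean. The page's published averages
# (42.2 / 38.5) differ slightly, so weighting or extra hidden
# benchmarks are likely involved.

agentic = {
    "Terminal-Bench 2.0": (39, 39),  # (Moonshot v1, Ministral 3 8B)
    "BrowseComp":         (49, 41),
    "OSWorld-Verified":   (41, 36),
}

moonshot_avg = sum(m for m, _ in agentic.values()) / len(agentic)
ministral_avg = sum(n for _, n in agentic.values()) / len(agentic)

print(f"Moonshot v1 unweighted mean:    {moonshot_avg:.1f}")   # 43.0
print(f"Ministral 3 8B unweighted mean: {ministral_avg:.1f}")  # 38.7
```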

Coding

| Benchmark | Moonshot v1 | Ministral 3 8B (Reasoning) |
| --- | --- | --- |
| Category average | 26.4 | 15.2 |
| HumanEval | 45 | 24 |
| SWE-bench Verified | 34 | 17 |
| LiveCodeBench | 21 | 16 |
| SWE-bench Pro | 30 | 14 |

Multimodal & Grounded

| Benchmark | Moonshot v1 | Ministral 3 8B (Reasoning) |
| --- | --- | --- |
| Category average | 52.6 | 33.4 |
| MMMU-Pro | 49 | 28 |
| OfficeQA Pro | 57 | 40 |

Reasoning

| Benchmark | Moonshot v1 | Ministral 3 8B (Reasoning) |
| --- | --- | --- |
| Category average | 55.5 | 42.1 |
| SimpleQA | 51 | 32 |
| MuSR | 49 | 33 |
| BBH | 73 | 70 |
| LongBench v2 | 58 | 44 |
| MRCRv2 | 56 | 47 |

Knowledge

| Benchmark | Moonshot v1 | Ministral 3 8B (Reasoning) |
| --- | --- | --- |
| Category average | 42.3 | 30 |
| MMLU | 53 | 30 |
| GPQA | 52 | 29 |
| SuperGPQA | 50 | 27 |
| OpenBookQA | 48 | 25 |
| MMLU-Pro | 64 | 54 |
| HLE | 5 | 5 |
| FrontierScience | 49 | 34 |

Instruction Following

| Benchmark | Moonshot v1 | Ministral 3 8B (Reasoning) |
| --- | --- | --- |
| Category average | 77 | 70 |
| IFEval | 77 | 70 |

Multilingual

| Benchmark | Moonshot v1 | Ministral 3 8B (Reasoning) |
| --- | --- | --- |
| Category average | 69.8 | 61.7 |
| MGSM | 73 | 63 |
| MMLU-ProX | 68 | 61 |

Mathematics

| Benchmark | Moonshot v1 | Ministral 3 8B (Reasoning) |
| --- | --- | --- |
| Category average | 61 | 47.8 |
| AIME 2023 | 53 | 33 |
| AIME 2024 | 55 | 35 |
| AIME 2025 | 54 | 34 |
| HMMT Feb 2023 | 49 | 29 |
| HMMT Feb 2024 | 51 | 31 |
| HMMT Feb 2025 | 50 | 30 |
| BRUMO 2025 | 52 | 32 |
| MATH-500 | 72 | 67 |
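For claims like "the biggest benchmark swing", here is a small sketch that scans every per-benchmark row above and reports the widest score gaps, including ties. Nothing is assumed beyond the scores listed on this page.

```python
# Find the largest per-benchmark score gaps across all categories.
# Scores copied from the tables above, as
# (Moonshot v1, Ministral 3 8B (Reasoning)).

scores = {
    "Terminal-Bench 2.0": (39, 39), "BrowseComp": (49, 41),
    "OSWorld-Verified": (41, 36), "HumanEval": (45, 24),
    "SWE-bench Verified": (34, 17), "LiveCodeBench": (21, 16),
    "SWE-bench Pro": (30, 14), "MMMU-Pro": (49, 28),
    "OfficeQA Pro": (57, 40), "SimpleQA": (51, 32), "MuSR": (49, 33),
    "BBH": (73, 70), "LongBench v2": (58, 44), "MRCRv2": (56, 47),
    "MMLU": (53, 30), "GPQA": (52, 29), "SuperGPQA": (50, 27),
    "OpenBookQA": (48, 25), "MMLU-Pro": (64, 54), "HLE": (5, 5),
    "FrontierScience": (49, 34), "IFEval": (77, 70), "MGSM": (73, 63),
    "MMLU-ProX": (68, 61), "AIME 2023": (53, 33), "AIME 2024": (55, 35),
    "AIME 2025": (54, 34), "HMMT Feb 2023": (49, 29),
    "HMMT Feb 2024": (51, 31), "HMMT Feb 2025": (50, 30),
    "BRUMO 2025": (52, 32), "MATH-500": (72, 67),
}

gaps = {name: m - n for name, (m, n) in scores.items()}
widest = max(gaps.values())
# Prints MMLU, GPQA, SuperGPQA, OpenBookQA -- all tied at 23 points.
for name, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    if gap == widest:
        print(f"{name}: {gap}-point gap")
```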

Frequently Asked Questions

Which is better, Moonshot v1 or Ministral 3 8B (Reasoning)?

Moonshot v1 is ahead overall, 47 to 36. Among the biggest single separators in this matchup is MMLU, where the scores are 53 and 30.

Which is better for knowledge tasks, Moonshot v1 or Ministral 3 8B (Reasoning)?

Moonshot v1 has the edge for knowledge tasks in this comparison, averaging 42.3 versus 30. Inside this category, MMLU, GPQA, SuperGPQA, and OpenBookQA all open the same 23-point gap, the widest on the page.

Which is better for coding, Moonshot v1 or Ministral 3 8B (Reasoning)?

Moonshot v1 has the edge for coding in this comparison, averaging 26.4 versus 15.2. Inside this category, HumanEval is the benchmark that creates the most daylight between them.

Which is better for math, Moonshot v1 or Ministral 3 8B (Reasoning)?

Moonshot v1 has the edge for math in this comparison, averaging 61 versus 47.8. Inside this category, every AIME, HMMT, and BRUMO benchmark opens a 20-point gap; MATH-500 is the closest contest at 72 versus 67.

Which is better for reasoning, Moonshot v1 or Ministral 3 8B (Reasoning)?

Moonshot v1 has the edge for reasoning in this comparison, averaging 55.5 versus 42.1. Inside this category, SimpleQA is the benchmark that creates the most daylight between them.

Which is better for agentic tasks, Moonshot v1 or Ministral 3 8B (Reasoning)?

Moonshot v1 has the edge for agentic tasks in this comparison, averaging 42.2 versus 38.5. Inside this category, BrowseComp is the benchmark that creates the most daylight between them.

Which is better for multimodal and grounded tasks, Moonshot v1 or Ministral 3 8B (Reasoning)?

Moonshot v1 has the edge for multimodal and grounded tasks in this comparison, averaging 52.6 versus 33.4. Inside this category, MMMU-Pro is the benchmark that creates the most daylight between them.

Which is better for instruction following, Moonshot v1 or Ministral 3 8B (Reasoning)?

Moonshot v1 has the edge for instruction following in this comparison, averaging 77 versus 70. IFEval is the only benchmark in this category, so the category average and the benchmark score are the same.

Which is better for multilingual tasks, Moonshot v1 or Ministral 3 8B (Reasoning)?

Moonshot v1 has the edge for multilingual tasks in this comparison, averaging 69.8 versus 61.7. Inside this category, MGSM is the benchmark that creates the most daylight between them.

Last updated: March 12, 2026
