GPT-OSS 20B vs Ministral 3 8B

Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, math, instruction-following, and multilingual workflows.

GPT-OSS 20B has the cleaner overall profile here, landing at 35 versus 32. It is a real lead, but still close enough that category-level strengths matter more than the headline number.

GPT-OSS 20B's sharpest advantage is agentic work, where it averages 35.4 against 28.9. The single biggest benchmark swing on the page is LongBench v2, 48 to 38 (a sketch of how that swing is identified follows below). Ministral 3 8B hits back in instruction following, multilingual, and math, so the answer changes if those are the parts of the workload you care about most.
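As a rough illustration of how the "biggest swing" call is made, here is a minimal Python sketch that scans a handful of the per-benchmark score pairs from the tables below and reports the largest absolute gap. The max-absolute-difference rule is an assumption about how this page defines a swing, not a documented methodology.

# Per-benchmark scores from this page, as (GPT-OSS 20B, Ministral 3 8B) pairs.
# Only a few benchmarks are listed here to keep the sketch short.
scores = {
    "Terminal-Bench 2.0": (35, 26),
    "BrowseComp": (42, 36),
    "OSWorld-Verified": (31, 27),
    "LongBench v2": (48, 38),
    "MRCRv2": (48, 41),
    "IFEval": (67, 69),
}

# "Biggest swing" is assumed to mean the largest absolute score gap.
biggest = max(scores, key=lambda name: abs(scores[name][0] - scores[name][1]))
gpt, ministral = scores[biggest]
print(f"Biggest swing: {biggest} ({gpt} vs {ministral}, gap {abs(gpt - ministral)})")
# Prints: Biggest swing: LongBench v2 (48 vs 38, gap 10)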

Quick Verdict

Pick GPT-OSS 20B if you want the stronger overall benchmark profile. Ministral 3 8B only becomes the better choice if instruction following or multilingual work is the priority.

Agentic

GPT-OSS 20B leads this category, averaging 35.4 to Ministral 3 8B's 28.9.

Benchmark            GPT-OSS 20B   Ministral 3 8B
Terminal-Bench 2.0   35            26
BrowseComp           42            36
OSWorld-Verified     31            27

Coding

GPT-OSS 20B leads narrowly, averaging 14.5 to Ministral 3 8B's 14.2.

Benchmark            GPT-OSS 20B   Ministral 3 8B
HumanEval            23            23
SWE-bench Verified   14            16
LiveCodeBench        11            15
SWE-bench Pro        18            13

Multimodal & Grounded

GPT-OSS 20B leads, averaging 36 to Ministral 3 8B's 32.4.

Benchmark      GPT-OSS 20B   Ministral 3 8B
MMMU-Pro       31            27
OfficeQA Pro   42            39

Reasoning

GPT-OSS 20B leads, averaging 40.4 to Ministral 3 8B's 36.1.

Benchmark      GPT-OSS 20B   Ministral 3 8B
SimpleQA       29            28
MuSR           27            26
BBH            62            63
LongBench v2   48            38
MRCRv2         48            41

Knowledge

GPT-OSS 20B leads narrowly, averaging 29 to Ministral 3 8B's 28.

Benchmark         GPT-OSS 20B   Ministral 3 8B
MMLU              31            28
GPQA              30            27
SuperGPQA         28            25
OpenBookQA        26            23
MMLU-Pro          53            52
HLE               1             3
FrontierScience   34            32

Instruction Following

Ministral 3 8B leads, averaging 69 to GPT-OSS 20B's 67.

Benchmark   GPT-OSS 20B   Ministral 3 8B
IFEval      67            69

Multilingual

Ministral 3 8B leads, averaging 61.7 to GPT-OSS 20B's 59.7.

Benchmark   GPT-OSS 20B   Ministral 3 8B
MGSM        61            63
MMLU-ProX   59            61

Mathematics

Ministral 3 8B leads narrowly, averaging 43.3 to GPT-OSS 20B's 43.1.

Benchmark       GPT-OSS 20B   Ministral 3 8B
AIME 2023       31            30
AIME 2024       33            32
AIME 2025       32            33
HMMT Feb 2023   27            26
HMMT Feb 2024   29            28
HMMT Feb 2025   28            27
BRUMO 2025      30            29
MATH-500        59            60
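The FAQ below repeatedly calls out the benchmark that "creates the most daylight" inside a category. The following sketch shows one way to derive that from the per-benchmark tables above, using two categories as examples. The widest-gap rule and the unweighted means are assumptions for illustration; the page's published category averages do not always equal a plain mean of the listed scores, so treat the computed means as approximate.

# Benchmark scores copied from the tables above, grouped by category,
# as (GPT-OSS 20B, Ministral 3 8B) pairs. Two categories shown for brevity.
categories = {
    "Coding": {
        "HumanEval": (23, 23),
        "SWE-bench Verified": (14, 16),
        "LiveCodeBench": (11, 15),
        "SWE-bench Pro": (18, 13),
    },
    "Multimodal & Grounded": {
        "MMMU-Pro": (31, 27),
        "OfficeQA Pro": (42, 39),
    },
}

for category, benches in categories.items():
    # Benchmark with the widest absolute gap ("most daylight") in the category.
    widest = max(benches, key=lambda b: abs(benches[b][0] - benches[b][1]))
    # Unweighted means are illustrative only: the page's published category
    # averages may weight benchmarks or include ones not listed in the tables.
    mean_gpt = sum(g for g, _ in benches.values()) / len(benches)
    mean_min = sum(m for _, m in benches.values()) / len(benches)
    print(f"{category}: widest gap on {widest}; "
          f"simple means {mean_gpt:.1f} vs {mean_min:.1f}")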

Frequently Asked Questions

Which is better, GPT-OSS 20B or Ministral 3 8B?

GPT-OSS 20B is ahead overall, 35 to 32. The biggest single separator in this matchup is LongBench v2, where the scores are 48 and 38.

Which is better for knowledge tasks, GPT-OSS 20B or Ministral 3 8B?

GPT-OSS 20B has the edge for knowledge tasks in this comparison, averaging 29 versus 28. Inside this category, MMLU is the benchmark that creates the most daylight between them.

Which is better for coding, GPT-OSS 20B or Ministral 3 8B?

GPT-OSS 20B has the edge for coding in this comparison, averaging 14.5 versus 14.2. Inside this category, SWE-bench Pro is the benchmark that creates the most daylight between them.

Which is better for math, GPT-OSS 20B or Ministral 3 8B?

Ministral 3 8B has the edge for math in this comparison, averaging 43.3 versus 43.1. The category is effectively a tie: every listed benchmark separates the two models by a single point.

Which is better for reasoning, GPT-OSS 20B or Ministral 3 8B?

GPT-OSS 20B has the edge for reasoning in this comparison, averaging 40.4 versus 36.1. Inside this category, LongBench v2 is the benchmark that creates the most daylight between them.

Which is better for agentic tasks, GPT-OSS 20B or Ministral 3 8B?

GPT-OSS 20B has the edge for agentic tasks in this comparison, averaging 35.4 versus 28.9. Inside this category, Terminal-Bench 2.0 is the benchmark that creates the most daylight between them.

Which is better for multimodal and grounded tasks, GPT-OSS 20B or Ministral 3 8B?

GPT-OSS 20B has the edge for multimodal and grounded tasks in this comparison, averaging 36 versus 32.4. Inside this category, MMMU-Pro is the benchmark that creates the most daylight between them.

Which is better for instruction following, GPT-OSS 20B or Ministral 3 8B?

Ministral 3 8B has the edge for instruction following in this comparison, averaging 69 versus 67. IFEval is the only benchmark in this category, and Ministral 3 8B scores 69 to GPT-OSS 20B's 67.

Which is better for multilingual tasks, GPT-OSS 20B or Ministral 3 8B?

Ministral 3 8B has the edge for multilingual tasks in this comparison, averaging 61.7 versus 59.7. Inside this category, MGSM and MMLU-ProX show the same two-point gap in Ministral 3 8B's favor.

Last updated: March 12, 2026
