Mercury 2 vs GPT-OSS 20B

Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, and math workflows.

Mercury 2 is clearly ahead on the aggregate, 65 to 35. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.

Mercury 2's sharpest advantage is in reasoning, where it averages 80.1 against 40.4. The single biggest benchmark swing on the page is MuSR, 82 to 27.
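If you want to sanity-check that kind of claim, the arithmetic is trivial to script. Below is a minimal Python sketch that recomputes the per-benchmark gap and picks out the widest one; the scores are transcribed from the reasoning table further down this page, and the snippet illustrates the method rather than pulling from any official data export.

    # Recompute per-benchmark gaps (Mercury 2 minus GPT-OSS 20B) and find
    # the widest one. Scores are transcribed from the reasoning table on
    # this page; treat this as an illustrative sketch, not a data feed.
    scores = {
        "SimpleQA":     (82, 29),
        "MuSR":         (82, 27),
        "BBH":          (87, 62),
        "LongBench v2": (77, 48),
        "MRCRv2":       (76, 48),
    }

    gaps = {name: mercury - gpt_oss for name, (mercury, gpt_oss) in scores.items()}
    for name, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
        print(f"{name:<12}  +{gap}")

    widest = max(gaps, key=gaps.get)
    print(f"Widest gap: {widest} (+{gaps[widest]} points)")  # MuSR, +55 points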

Mercury 2 is the reasoning model in the pair, while GPT-OSS 20B is not. That usually helps on harder chain-of-thought-heavy tests, but it can also mean more latency and more token spend in real use.
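To see why that matters in practice, here is a back-of-the-envelope cost comparison. Every number in it is a hypothetical placeholder chosen for illustration; this page publishes neither pricing nor measured token usage for either model, so read the sketch as the shape of the tradeoff, not a quote.

    # Hypothetical per-request cost comparison between a reasoning model and
    # a non-reasoning model. ALL token counts and prices below are made-up
    # placeholders; this page reports neither pricing nor token usage.

    def request_cost(input_tokens: int, output_tokens: int,
                     price_in: float, price_out: float) -> float:
        """Cost in dollars, with prices quoted per million tokens."""
        return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

    # A reasoning model emits extra chain-of-thought tokens before its final
    # answer, so output_tokens is the lever that drives latency and spend.
    plain = request_cost(1_000, 500, price_in=0.50, price_out=2.00)
    reasoning = request_cost(1_000, 4_000, price_in=0.50, price_out=2.00)

    print(f"non-reasoning: ${plain:.4f} per request")
    print(f"reasoning:     ${reasoning:.4f} per request "
          f"({reasoning / plain:.1f}x the cost)")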

Quick Verdict

Pick Mercury 2 if you want the stronger benchmark profile. GPT-OSS 20B only becomes the better choice if you would rather avoid the extra latency and token burn of a reasoning model.

Agentic

Benchmark           Mercury 2  GPT-OSS 20B
Category average    63.7       35.4
Terminal-Bench 2.0  63         35
BrowseComp          67         42
OSWorld-Verified    62         31

Coding

Benchmark           Mercury 2  GPT-OSS 20B
Category average    41.1       14.5
HumanEval           75         23
SWE-bench Verified  46         14
LiveCodeBench       38         11
SWE-bench Pro       43         18

Multimodal & Grounded

Benchmark         Mercury 2  GPT-OSS 20B
Category average  68.3       36
MMMU-Pro          66         31
OfficeQA Pro      71         42

Reasoning

Benchmark         Mercury 2  GPT-OSS 20B
Category average  80.1       40.4
SimpleQA          82         29
MuSR              82         27
BBH               87         62
LongBench v2      77         48
MRCRv2            76         48

Knowledge

Benchmark         Mercury 2  GPT-OSS 20B
Category average  57.2       29
MMLU              78         31
GPQA              78         30
SuperGPQA         76         28
OpenBookQA        74         26
MMLU-Pro          72         53
HLE               9          1
FrontierScience   69         34

Instruction Following

Benchmark         Mercury 2  GPT-OSS 20B
Category average  84         67
IFEval            84         67

Multilingual

Benchmark         Mercury 2  GPT-OSS 20B
Category average  79.7       59.7
MGSM              81         61
MMLU-ProX         79         59

Mathematics

Benchmark         Mercury 2  GPT-OSS 20B
Category average  80.9       43.1
AIME 2023         81         31
AIME 2024         83         33
AIME 2025         82         32
HMMT Feb 2023     77         27
HMMT Feb 2024     79         29
HMMT Feb 2025     78         28
BRUMO 2025        80         30
MATH-500          82         59

Frequently Asked Questions

Which is better, Mercury 2 or GPT-OSS 20B?

Mercury 2 is ahead overall, 65 to 35. The biggest single separator in this matchup is MuSR, where the scores are 82 and 27.

Which is better for knowledge tasks, Mercury 2 or GPT-OSS 20B?

Mercury 2 has the edge for knowledge tasks in this comparison, averaging 57.2 versus 29. Inside this category, GPQA shows the widest gap, though SuperGPQA and OpenBookQA are essentially tied with it at around 48 points each.

Which is better for coding, Mercury 2 or GPT-OSS 20B?

Mercury 2 has the edge for coding in this comparison, averaging 41.1 versus 14.5. Inside this category, HumanEval is the benchmark that creates the most daylight between them.

Which is better for math, Mercury 2 or GPT-OSS 20B?

Mercury 2 has the edge for math in this comparison, averaging 80.9 versus 43.1. Inside this category the gap is strikingly uniform: every contest benchmark from AIME 2023 through BRUMO 2025 shows a roughly 50-point spread, and MATH-500 is the only test where GPT-OSS 20B recovers some ground.

Which is better for reasoning, Mercury 2 or GPT-OSS 20B?

Mercury 2 has the edge for reasoning in this comparison, averaging 80.1 versus 40.4. Inside this category, MuSR is the benchmark that creates the most daylight between them.

Which is better for agentic tasks, Mercury 2 or GPT-OSS 20B?

Mercury 2 has the edge for agentic tasks in this comparison, averaging 63.7 versus 35.4. Inside this category, OSWorld-Verified is the benchmark that creates the most daylight between them.

Which is better for multimodal and grounded tasks, Mercury 2 or GPT-OSS 20B?

Mercury 2 has the edge for multimodal and grounded tasks in this comparison, averaging 68.3 versus 36. Inside this category, MMMU-Pro is the benchmark that creates the most daylight between them.

Which is better for instruction following, Mercury 2 or GPT-OSS 20B?

Mercury 2 has the edge for instruction following in this comparison, averaging 84 versus 67. IFEval is the only benchmark in this category, so the category averages and the benchmark scores are one and the same.

Which is better for multilingual tasks, Mercury 2 or GPT-OSS 20B?

Mercury 2 has the edge for multilingual tasks in this comparison, averaging 79.7 versus 59.7. Inside this category the gap is even: MGSM and MMLU-ProX both show a 20-point spread.

Last updated: March 12, 2026
