o1-preview vs o1-pro

Side-by-side benchmark comparison across knowledge, coding, math, reasoning, instruction following, and multilingual benchmarks.

Sibling matchup inside the o1 family.

o1-preview and o1-pro sit in the same o1 family. This page is less about two unrelated model lineages and more about how the siblings trade off on benchmark shape, token costs, and practical limits like context window.

o1-preview is clearly ahead on the aggregate, 83 to 33. The gap is large enough that you do not need to squint at the spreadsheet to see the difference.

o1-preview's sharpest advantage is in mathematics, where it averages 93.1 against o1-pro's 86. The single biggest benchmark swing on the page is GPQA, 90 to 79 in o1-preview's favor. o1-pro does hit back on the knowledge category average, so the answer changes if that is the part of the workload you care about most.

Quick Verdict

o1-preview makes more sense if mathematics is the priority, while o1-pro is the cleaner fit if knowledge is the priority.

Knowledge

Category winner: o1-pro (average 79 vs 78 for o1-preview)

Benchmark     o1-preview   o1-pro
MMLU          92           -
GPQA          90           79
SuperGPQA     88           -
OpenBookQA    86           -
MMLU-Pro      80           -
HLE           32           -
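
As a sanity check on those category averages, here is a minimal sketch of how they fall out if you assume each average is a plain unweighted mean over the scores a model actually reports, with "-" entries skipped. The dictionary layout and function name are illustrative choices of ours, not anything published by the site.

```python
# Knowledge scores from the table above as (o1-preview, o1-pro);
# None stands in for a "-" (no reported result).
knowledge = {
    "MMLU":       (92, None),
    "GPQA":       (90, 79),
    "SuperGPQA":  (88, None),
    "OpenBookQA": (86, None),
    "MMLU-Pro":   (80, None),
    "HLE":        (32, None),
}

def category_average(scores: dict, model: int) -> float:
    """Unweighted mean over the benchmarks this model actually reports."""
    vals = [pair[model] for pair in scores.values() if pair[model] is not None]
    return sum(vals) / len(vals)

print(category_average(knowledge, 0))  # 78.0 -> o1-preview, across six benchmarks
print(category_average(knowledge, 1))  # 79.0 -> o1-pro, from GPQA alone
```

Under that reading, o1-pro's 79 rests on a single benchmark, while o1-preview's 78 spans six and is dragged down by the hard HLE set, which is worth keeping in mind before treating the one-point knowledge "win" as decisive.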

Coding

Category winner: o1-preview (o1-pro reports no scores in this category)

Benchmark            o1-preview   o1-pro
HumanEval            86           -
SWE-bench Verified   65           -
LiveCodeBench        60           -

Mathematics

Category winner: o1-preview (average 93.1 vs 86 for o1-pro)

Benchmark       o1-preview   o1-pro
AIME 2023       94           -
AIME 2024       96           86
AIME 2025       95           -
HMMT Feb 2023   90           -
HMMT Feb 2024   92           -
HMMT Feb 2025   91           -
BRUMO 2025      93           -
MATH-500        94           -
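
The intro's "single biggest benchmark swing" claim is also easy to verify from the tables, since only two benchmarks on the page have scores for both models. A sketch like the following (the variable names and layout are ours, not the site's) is all it takes:

```python
# The only head-to-head benchmarks on this page, as (o1-preview, o1-pro).
head_to_head = {
    "GPQA":      (90, 79),   # knowledge
    "AIME 2024": (96, 86),   # mathematics
}

# Benchmark with the largest absolute score gap between the two models.
biggest = max(head_to_head, key=lambda b: abs(head_to_head[b][0] - head_to_head[b][1]))
gap = head_to_head[biggest][0] - head_to_head[biggest][1]
print(biggest, gap)  # GPQA 11 -- edging out AIME 2024's 10-point gap
```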

Reasoning

Category winner: o1-preview (o1-pro reports no scores in this category)

Benchmark   o1-preview   o1-pro
SimpleQA    88           -
MuSR        86           -
BBH         93           -

Instruction Following

Category winner: o1-preview (o1-pro reports no scores in this category)

Benchmark   o1-preview   o1-pro
IFEval      88           -

Multilingual

Category winner: o1-preview (o1-pro reports no scores in this category)

Benchmark   o1-preview   o1-pro
MGSM        90           -

Frequently Asked Questions

Which is better, o1-preview or o1-pro?

o1-preview and o1-pro are sibling variants in the o1 family, so the right pick depends on whether you value the better benchmark line, cheaper tokens, or the larger context window. o1-preview is ahead overall 83 to 33.

Which is better for knowledge tasks, o1-preview or o1-pro?

o1-pro has the edge for knowledge tasks in this comparison, averaging 79 versus 78, though the averages are not directly comparable: o1-pro reports only GPQA, while o1-preview's 78 spans six benchmarks and is pulled down by a low HLE score. GPQA, the one knowledge benchmark both models report, is also where the most daylight opens up between them, with o1-preview ahead 90 to 79.

Which is better for math, o1-preview or o1-pro?

o1-preview has the edge for math in this comparison, averaging 93.1 versus 86. AIME 2024 is the only math benchmark where both models report scores, and it shows the gap directly: 96 for o1-preview against 86 for o1-pro.

Last updated: March 9, 2026
