o1-pro Benchmark Scores & Performance

Benchmark analysis of o1-pro by OpenAI across two tests.

According to BenchLM.ai, o1-pro ranks #96 out of 100 models with an overall score of 33/100. While not a frontier model by this aggregate, it retains clear strengths in specific categories, notably knowledge.

o1-pro is a proprietary model with a 200K token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.

o1-pro sits inside the o1 family alongside o1 and o1-preview. This profile currently covers 2 of the 22 tracked benchmarks, so the overall score is conservative until the rest of the suite is filled in.

Its strongest category is Knowledge (#19), while its weakest is Coding (#98). This performance profile makes it particularly effective for knowledge-intensive tasks like research, analysis, and factual Q&A.

Creator

OpenAI

Source Type

Proprietary

Reasoning

Reasoning

Context Window

200K

Overall Score

33 (#96 of 100)

Family & Lineage

Family

o1

Pro

Canonical Entry

o1

Sibling Models

o1, o1-preview

Knowledge Benchmarks

GPQA
79

Mathematics Benchmarks

AIME 2024
86

Frequently Asked Questions

How does o1-pro perform overall in AI benchmarks?

o1-pro ranks #96 out of 100 models with an overall score of 33. It is created by OpenAI and features a 200K context window.

Is o1-pro good for knowledge and understanding?

o1-pro ranks #19 out of 100 models in knowledge and understanding benchmarks with an average score of 79. This is its strongest category.

Is o1-pro good for mathematics?

o1-pro ranks #29 out of 100 models in mathematics benchmarks with an average score of 86. There are stronger options in this category.

Which sibling models are related to o1-pro?

o1-pro belongs to the o1 family. Related variants on BenchLM include o1 and o1-preview.

Does o1-pro have full benchmark coverage on BenchLM?

Not yet. o1-pro currently has 2 sourced benchmark scores out of the 22 benchmarks BenchLM tracks, so its overall score is intentionally conservative until more results are added.
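
The dilution effect behind that conservative score can be sketched as follows. This is a hypothetical illustration, not BenchLM's published formula: it simply assumes that unsourced benchmarks contribute nothing to the aggregate, so a model with only 2 of 22 scores lands far below its per-benchmark average. (The result here differs from the published 33, which shows BenchLM's real weighting must be more nuanced.)

```python
def conservative_overall(scores: dict[str, float], tracked: int) -> float:
    """Hypothetical aggregate: average sourced scores over the full
    tracked suite, so missing benchmarks dilute the result."""
    return sum(scores.values()) / tracked

# o1-pro's two sourced scores from this profile
sourced = {"GPQA": 79, "AIME 2024": 86}
print(conservative_overall(sourced, tracked=22))  # 7.5, far below the 82.5 per-benchmark mean
```

The point is only directional: with most of the suite unsourced, any dilution-style aggregate sits well below the mean of the sourced scores, and it rises as more results are added.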

What is the context window size of o1-pro?

o1-pro has a context window of 200K tokens, which determines how much text it can process in a single interaction.
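
As a rough illustration of what a 200K-token window means in practice, the sketch below estimates whether a document fits using the common heuristic of about 4 characters per English token. The heuristic is an assumption, not OpenAI's actual tokenizer; a real integration would count tokens exactly.

```python
def fits_in_context(text: str, window_tokens: int = 200_000,
                    chars_per_token: float = 4.0) -> bool:
    """Rough fit check using a ~4 chars/token heuristic (not a real tokenizer)."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= window_tokens

doc = "word " * 100_000  # ~500,000 characters -> ~125,000 estimated tokens
print(fits_in_context(doc))  # True: comfortably inside a 200K-token window
```

By this estimate, a 200K-token window corresponds to roughly 800,000 characters of English text, on the order of a long novel per interaction.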

Last updated: March 9, 2026
