
o3-mini

OpenAI · Established · Released Jan 31, 2025

Overall Score: Est. 58 (provisional rank #52 of 110)
Arena Elo: 1348
Categories Ranked: 5 of 8
Price (1M tokens): $1.1 in / $4.4 out
Speed: 160 tok/s
Context: 200K
Proprietary · Reasoning
Confidence: mini

According to BenchLM.ai, o3-mini ranks #52 out of 110 models on the provisional leaderboard with an overall score of 58/100. It does not yet have enough sourced coverage for BenchLM's verified leaderboard. While not a frontier model, it has clear strengths in specific categories, most notably instruction following.

o3-mini is a proprietary model with a 200K token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.
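
As a concrete illustration of that trade-off, the sketch below calls o3-mini through OpenAI's Python SDK and estimates per-request cost from the prices listed above ($1.1 per 1M input tokens, $4.4 per 1M output tokens). The reasoning_effort setting, the example prompt, and the cost arithmetic are shown as assumptions for illustration, not BenchLM data.

```python
# Minimal sketch using the OpenAI Python SDK (pip install openai).
# Prices are the per-1M-token figures listed on this page and may change.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # "low" / "medium" / "high": more effort means more hidden reasoning tokens
    messages=[{"role": "user", "content": "A train leaves at 3:40 and arrives at 6:05. How long is the trip?"}],
)

usage = resp.usage
# The chain-of-thought is not returned, but its tokens are billed as output tokens,
# so completion_tokens is typically much larger than the visible answer.
cost = usage.prompt_tokens * 1.10 / 1e6 + usage.completion_tokens * 4.40 / 1e6
print(resp.choices[0].message.content)
print(f"prompt={usage.prompt_tokens}, completion={usage.completion_tokens}, est. cost ≈ ${cost:.4f}")
```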

o3-mini sits inside the o3 family alongside o3 and o3-pro. This profile currently covers 5 of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.

Its strongest category is Instruction Following (#16), while its weakest is Multilingual (#59). Outside of that multilingual gap, its category scores are fairly even, making it a reasonably well-rounded choice across a range of tasks.

Ranking Distribution

Category rank across 7 benchmark categories — sorted by best rank

Category Performance

Scores across all benchmark categories (0-100 scale)

Category Breakdown

Agentic
Rank: #25 · Score: 65.9 / 100 · Weight: 22% · 0 benchmarks
Tracked: Terminal-Bench 2.0, BrowseComp, OSWorld-Verified, GAIA, TAU-bench, WebArena

Coding
Score: 51.1 / 100 · Weight: 20% · 1 benchmark
Tracked: SWE-bench Verified, LiveCodeBench, SWE-bench Pro, SWE-Rebench, SciCode

Reasoning
Rank: #28 · Score: 68.0 / 100 · Weight: 17% · 0 benchmarks
Tracked: MuSR, LongBench v2, MRCRv2, ARC-AGI-2

Knowledge
Score: 61.3 / 100 · Weight: 12% · 2 benchmarks
Tracked: GPQA, SuperGPQA, MMLU-Pro, HLE, FrontierScience, SimpleQA

Math
Score: 0.0 / 100 · Weight: 5% · 1 benchmark
Tracked: AIME 2025, BRUMO 2025, MATH-500, FrontierMath

Multilingual
Rank: #59 · Score: 47.1 / 100 · Weight: 7% · 0 benchmarks
Tracked: MGSM, MMLU-ProX

Multimodal
Rank: #44 · Score: 65.1 / 100 · Weight: 12% · 0 benchmarks
Tracked: MMMU-Pro, OfficeQA Pro

Inst. Following
Rank: #16 · Score: 86.8 / 100 · Weight: 5% · 1 benchmark
Tracked: IFEval, IFBench
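
For context on how these per-category numbers might roll up into the headline figure, the sketch below computes a plain weight-normalized average of the scores shown above. This is an illustrative assumption, not BenchLM's published formula, and it lands near (not exactly on) the estimated overall score of 58.

```python
# Illustrative weighted average of the category scores listed above.
# Scores and weights are this page's published numbers; the aggregation
# formula itself is an assumption, since BenchLM does not spell it out.
categories = {
    "Agentic":         (65.9, 22),
    "Coding":          (51.1, 20),
    "Reasoning":       (68.0, 17),
    "Knowledge":       (61.3, 12),
    "Math":            ( 0.0,  5),
    "Multilingual":    (47.1,  7),
    "Multimodal":      (65.1, 12),
    "Inst. Following": (86.8,  5),
}

total_weight = sum(weight for _, weight in categories.values())
overall = sum(score * weight for score, weight in categories.values()) / total_weight
print(f"weight-normalized average ≈ {overall:.1f}")  # ≈ 59.1 with these numbers
```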

Chatbot Arena Performance

Text Overall: 1348 (CI ±3.5, 57,373 votes)
Coding: 1415 (CI ±6.4, 9,461 votes)
Math: 1382 (CI ±8.4, 4,722 votes)
Instruction Following: 1343 (CI ±5.1, 16,961 votes)
Creative Writing: 1301 (CI ±6.9, 8,190 votes)
Multi-turn: 1340 (CI ±6.5, 9,491 votes)
Hard Prompts: 1370 (CI ±4.9, 20,114 votes)
Hard Prompts (English): 1383 (CI ±6.0, 11,294 votes)
Longer Query: 1358 (CI ±6.5, 9,231 votes)
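
Arena Elo numbers are only meaningful relative to other models' ratings. Reading them through the standard Elo logistic model (an interpretive assumption; Chatbot Arena itself fits a closely related Bradley-Terry model), a rating gap translates into an expected head-to-head win rate:

```python
# Expected win probability implied by an Elo rating gap (standard logistic form).
def elo_win_prob(rating_a: float, rating_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# Example with o3-mini's overall rating (1348) against a hypothetical 1400-rated model:
print(f"{elo_win_prob(1400, 1348):.2f}")  # ≈ 0.57, i.e. the higher-rated model wins ~57% of votes
```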

Benchmark Details

Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.

o3 Family

Mini: o3-mini
Canonical Entry: o3

Frequently Asked Questions

How does o3-mini perform overall in AI benchmarks?

o3-mini currently ranks #52 out of 110 models on BenchLM's provisional leaderboard with an overall score of 58 (estimated). It was created by OpenAI and has a 200K-token context window.

Is o3-mini good for knowledge and understanding?

o3-mini has visible benchmark coverage in knowledge and understanding, but BenchLM does not currently assign it a global category rank there.

Is o3-mini good for coding and programming?

o3-mini has visible benchmark coverage in coding and programming, but BenchLM does not currently assign it a global category rank there.

Is o3-mini good for mathematics?

o3-mini has visible benchmark coverage in mathematics, but BenchLM does not currently assign it a global category rank there.

Is o3-mini good for instruction following?

o3-mini ranks #16 out of 110 models in instruction following benchmarks with an average score of 86.8. This is its strongest category, though there are still stronger options.

Which sibling models are related to o3-mini?

o3-mini belongs to the o3 family. Related variants on BenchLM include o3 and o3-pro.

Does o3-mini have full benchmark coverage on BenchLM?

Not yet. o3-mini currently has 5 published benchmark scores out of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.

What is the context window size of o3-mini?

o3-mini has a context window of 200K tokens, which caps how much text it can process in a single interaction.
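
To check whether a prompt actually fits, you can count tokens locally before sending it. The sketch below assumes the o200k_base tokenizer (the encoding OpenAI's recent models use); treat it as an approximation rather than an official mapping for o3-mini.

```python
# Rough local check against a 200K-token context window (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("o200k_base")  # assumed encoding for o3-mini

def fits_in_context(text: str, context_window: int = 200_000, output_headroom: int = 8_000) -> bool:
    """Reserve headroom for the model's (reasoning + visible) output tokens."""
    return len(enc.encode(text)) + output_headroom <= context_window

print(fits_in_context("Summarize this report. " * 20_000))
```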

Last updated: April 21, 2026 · Runtime metrics stay blank until BenchLM has a sourced snapshot.
