American Invitational Mathematics Examination 2024 (AIME 2024)

The 2024 edition of AIME, maintaining the same format of 15 challenging mathematics problems with integer answers from 000 to 999.

Benchmark score on AIME 2024 — April 20, 2026

BenchLM mirrors the published score view for AIME 2024. o3-mini leads the public snapshot at 87.3%. BenchLM does not use these results to rank models overall.

1 model · Math · Refreshing · Display only · Updated April 20, 2026

About AIME 2024

Year

2024

Tasks

15 problems

Format

Integer answers 000-999

Difficulty

High school olympiad level

AIME 2024 continues the tradition of challenging mathematical reasoning problems. These problems test deep understanding of mathematical concepts and creative problem-solving abilities.
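Because every AIME answer is an integer from 000 to 999, scoring a model's responses reduces to normalizing both sides to three-digit strings and comparing them. A minimal sketch of that check (the function names are illustrative, not BenchLM's actual scoring code):

```python
def normalize_aime_answer(raw: str) -> str:
    """Normalize a raw answer to the AIME three-digit format, e.g. '73' -> '073'."""
    value = int(raw.strip())
    if not 0 <= value <= 999:
        raise ValueError(f"AIME answers must be in 000-999, got {value}")
    return f"{value:03d}"

def score(predictions: list[str], answers: list[str]) -> float:
    """Fraction of the 15 problems answered correctly (exact match after normalization)."""
    correct = sum(
        normalize_aime_answer(p) == normalize_aime_answer(a)
        for p, a in zip(predictions, answers)
    )
    return correct / len(answers)
```

Zero-padding matters here: a model that outputs "73" and an answer key that stores "073" should still match, which is why both sides are normalized before comparison.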

BenchLM freshness & provenance

Version

AIME 2024

Refresh cadence

Annual

Staleness state

Refreshing

Question availability

Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
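The three-way decision described above could be modeled as a function of benchmark age relative to its refresh cadence. The tier names mirror the page; the thresholds and function itself are illustrative assumptions, not BenchLM's published policy:

```python
from datetime import date

def benchmark_tier(last_refreshed: date, cadence_days: int, today: date) -> str:
    """Classify a benchmark by staleness relative to its refresh cadence.

    Tiers follow the page's wording; the threshold values are assumptions
    for illustration only, not BenchLM's actual scoring policy.
    """
    age = (today - last_refreshed).days
    if age <= cadence_days:
        return "strong differentiator"  # fresh within one cadence period
    if age <= 2 * cadence_days:
        return "watch"  # overdue, but not yet badly stale
    return "display only"  # too stale to influence rankings
```

Under this sketch, an annually refreshed benchmark last updated in April 2026 would still count as a strong differentiator, while one more than two cadence periods old would drop to display-only status.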

Benchmark score table (1 model)

Rank  Model             Score
1     o3-mini (OpenAI)  87.3%

FAQ

What does AIME 2024 measure?

The 2024 edition of AIME, maintaining the same format of 15 challenging mathematics problems with integer answers from 000 to 999.

Which model scores highest on AIME 2024?

o3-mini by OpenAI currently leads with a score of 87.3% on AIME 2024.

How many models are evaluated on AIME 2024?

1 AI model has been evaluated on AIME 2024 on BenchLM.

Last updated: April 20, 2026 · BenchLM version: AIME 2024

AI models change fast. We track them for you.

For engineers, researchers, and the plain curious — a weekly brief on new models, ranking shifts, and pricing changes.

Free. No spam. Unsubscribe anytime.