
GPT-4.1

OpenAI · Established · Released Apr 14, 2025
Overall Score
Est. 60 · Prov. #47 of 112
Arena Elo
1413
Categories Ranked
5 of 8
Price (1M tokens)
$2 in / $8 out
Speed
108 tok/s
Context
1M
Proprietary · Non-Reasoning
Confidence
base
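
The pricing row above lists $2 per 1M input tokens and $8 per 1M output tokens. As a quick illustration of what a single request costs at those rates (the rates come from the header above; the token counts are made-up examples, not BenchLM data):

```python
# Illustrative per-request cost estimate at the listed GPT-4.1 rates:
# $2 per 1M input tokens, $8 per 1M output tokens.
INPUT_PER_M = 2.00   # USD per 1M input tokens (from the header above)
OUTPUT_PER_M = 8.00  # USD per 1M output tokens (from the header above)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + (output_tokens / 1_000_000) * OUTPUT_PER_M

# Hypothetical example: a 50k-token prompt with a 2k-token reply.
print(f"${request_cost(50_000, 2_000):.4f}")  # -> $0.1160
```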

According to BenchLM.ai, GPT-4.1 ranks #47 out of 112 models on the provisional leaderboard with an overall score of 60/100. It does not yet have enough sourced coverage for BenchLM's verified leaderboard. While not a frontier model, its large context window, low price, and relative strength in reasoning give it specific advantages depending on the use case.

GPT-4.1 is a proprietary model with a 1M token context window. It processes queries without explicit chain-of-thought reasoning, offering faster response times and lower token usage.
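
Because there is no separate reasoning phase, a call to GPT-4.1 looks like any standard chat completion. A minimal sketch, assuming the official OpenAI Python SDK and the `gpt-4.1` model identifier; the prompt and key handling here are illustrative, not from BenchLM:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Single-turn chat completion; the reply comes back as ordinary
# assistant text with no explicit chain-of-thought step.
response = client.chat.completions.create(
    model="gpt-4.1",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize this architecture doc: ..."},
    ],
)

print(response.choices[0].message.content)
```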

GPT-4.1 sits inside the GPT-4.1 family alongside GPT-4.1 mini and GPT-4.1 nano. BenchLM links it directly to GPT-4o as the earlier related model in that lineage. This profile currently covers 4 of the 153 benchmarks BenchLM tracks. BenchLM only exposes non-generated benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.

Its strongest category is Reasoning (#17), while its weakest is Multilingual (#70). This performance profile makes it particularly strong for complex reasoning, multi-step problem solving, and analytical tasks.

Ranking Distribution

Category rank across 7 benchmark categories — sorted by best rank

Category Performance

Scores across all benchmark categories (0-100 scale)

Category Breakdown

Agentic

#33
60.0 / 100
Weight: 22% · 0 benchmarks
Terminal-Bench 2.0, BrowseComp, OSWorld-Verified, GAIA, TAU-bench, WebArena

Coding

56.8 / 100
Weight: 20% · 1 benchmark
SWE-bench Verified, LiveCodeBench, SWE-bench Pro, SWE-Rebench, SciCode

Reasoning

#17
77.3 / 100
Weight: 17% · 0 benchmarks
MuSR, LongBench v2, MRCRv2, ARC-AGI-2

Knowledge

52.2 / 100
Weight: 12% · 2 benchmarks
GPQA, SuperGPQA, MMLU-Pro, HLE, FrontierScience, SimpleQA

Math

0.0 / 100
Weight: 5% · 0 benchmarks
AIME 2025, BRUMO 2025, MATH-500, FrontierMath

Multilingual

#70
35.3 / 100
Weight: 7% · 0 benchmarks
MGSM, MMLU-ProX

Multimodal

#51
63.9 / 100
Weight: 12% · 0 benchmarks
MMMU-Pro, OfficeQA Pro

Inst. Following

#41
75.6 / 100
Weight: 5% · 1 benchmark
IFEval, IFBench
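
BenchLM does not publish its exact aggregation formula, but the per-category weights above suggest a weighted average of category scores. The sketch below combines the listed scores and weights as an approximation; it will not exactly reproduce the published overall score of 60 (estimated), since categories without sourced benchmarks may be excluded or otherwise adjusted:

```python
# Rough reconstruction of an overall score as a weighted average of the
# category scores and weights listed above. This is an assumption about
# BenchLM's method, not its published formula.
categories = {
    # name: (score out of 100, weight)
    "Agentic":         (60.0, 0.22),
    "Coding":          (56.8, 0.20),
    "Reasoning":       (77.3, 0.17),
    "Knowledge":       (52.2, 0.12),
    "Math":            (0.0,  0.05),
    "Multilingual":    (35.3, 0.07),
    "Multimodal":      (63.9, 0.12),
    "Inst. Following": (75.6, 0.05),
}

weighted = sum(score * weight for score, weight in categories.values())
total_weight = sum(weight for _, weight in categories.values())
# ~57.9 with Math included; ~60.9 if the empty Math category is dropped.
print(round(weighted / total_weight, 1))
```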

Chatbot Arena Performance

Text Overall: 1413

Benchmark Details

Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.

GPT-4.1 Family

Base entry

Related Earlier Model

GPT-4o

Frequently Asked Questions

How does GPT-4.1 perform overall in AI benchmarks?

GPT-4.1 currently ranks #47 out of 112 models on BenchLM's provisional leaderboard with an overall score of 60 (estimated). It is created by OpenAI and features a 1M context window.

Is GPT-4.1 good for knowledge and understanding?

GPT-4.1 has visible benchmark coverage in knowledge and understanding, but BenchLM does not currently assign it a global category rank there.

Is GPT-4.1 good for coding and programming?

GPT-4.1 has visible benchmark coverage in coding and programming, but BenchLM does not currently assign it a global category rank there.

Is GPT-4.1 good for instruction following?

GPT-4.1 ranks #41 out of 112 models in instruction following benchmarks with an average score of 75.6. There are stronger options in this category.

Which sibling models are related to GPT-4.1?

GPT-4.1 belongs to the GPT-4.1 family. Related variants on BenchLM include GPT-4.1 mini and GPT-4.1 nano.

Does GPT-4.1 have full benchmark coverage on BenchLM?

Not yet. GPT-4.1 currently has 4 published benchmark scores out of the 153 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.

What is the context window size of GPT-4.1?

GPT-4.1 has a context window of 1M tokens, which determines how much text it can process in a single interaction.
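
The window is measured in tokens rather than characters, so checking whether a document fits means counting tokens first. A minimal sketch using the tiktoken library; the choice of the o200k_base encoding here is an assumption, since BenchLM does not state which tokenizer GPT-4.1 uses:

```python
import tiktoken

CONTEXT_WINDOW = 1_000_000  # 1M tokens, as listed above

# Assumed encoding; use whatever tokenizer your SDK reports for the model.
enc = tiktoken.get_encoding("o200k_base")

def fits_in_context(text: str, reserved_for_output: int = 4_000) -> bool:
    """True if the prompt still leaves room for the reserved output budget."""
    return len(enc.encode(text)) + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("example document " * 10_000))
```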

Last updated: April 23, 2026 · Runtime metrics stay blank until BenchLM has a sourced snapshot.
