
Gemini 3 Pro

Google · Current · Released Nov 18, 2025

Overall Score
Est. 83 · Prov. #14 of 110
Arena Elo
1486
Categories Ranked
8 of 8
Price (1M tokens)
$ in / $ out
Speed
109 tok/s
Context
2M
Proprietary · Non-Reasoning
Confidence
base

According to BenchLM.ai, Gemini 3 Pro ranks #14 out of 110 models on the provisional leaderboard with an overall score of 83/100. It does not yet have enough sourced coverage for BenchLM's verified leaderboard. This places it in the upper tier of AI models, with competitive scores across most benchmark categories.

Gemini 3 Pro is a proprietary model with a 2M token context window. It processes queries without explicit chain-of-thought reasoning, offering faster response times and lower token usage.

Gemini 3 Pro sits inside the Gemini 3 Pro family alongside Gemini 3 Pro Deep Think. This profile currently covers 7 of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.

Its best-ranked category is Reasoning (#12), while its weakest is Instruction Following (#34). This performance profile makes it particularly strong for complex reasoning, multi-step problem solving, and analytical tasks.

Ranking Distribution

Category rank across 8 benchmark categories — sorted by best rank

Category Performance

Scores across all benchmark categories (0-100 scale)

Category Breakdown

Agentic

#17 · 75.0 / 100
Weight: 22% · 0 benchmarks
Terminal-Bench 2.0, BrowseComp, OSWorld-Verified, GAIA, TAU-bench, WebArena

Coding

#25 · 74.9 / 100
Weight: 20% · 0 benchmarks
SWE-bench Verified, LiveCodeBench, SWE-bench Pro, SWE-Rebench, SciCode

Reasoning

#12 · 82.8 / 100
Weight: 17% · 1 benchmark
MuSR, LongBench v2, MRCRv2, ARC-AGI-2

Knowledge

#13 · 83.6 / 100
Weight: 12% · 0 benchmarks
GPQA, SuperGPQA, MMLU-Pro, HLE, FrontierScience, SimpleQA

Math

#20 · 83.0 / 100
Weight: 5% · 0 benchmarks
AIME 2025, BRUMO 2025, MATH-500, FrontierMath

Multilingual

#20 · 81.7 / 100
Weight: 7% · 0 benchmarks
MGSM, MMLU-ProX

Multimodal

#17 · 86.0 / 100
Weight: 12% · 6 benchmarks
MMMU-Pro, OfficeQA Pro

Inst. Following

#34 · 79.1 / 100
Weight: 5% · 0 benchmarks
IFEval, IFBench
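The category weights listed above sum to 100%, so one way to see how they interact with the per-category scores is a plain weighted mean. This is an illustrative sketch only: BenchLM does not publish its aggregation formula on this page, and as the output shows, a linear weighted mean of these scores lands near 79.7 rather than the displayed estimate of 83, so the real aggregation evidently involves more than a simple average.

```python
# Illustrative sketch: combining the page's category scores with its
# listed weights via a plain weighted mean. Not BenchLM's actual formula.
categories = {
    # name: (score out of 100, weight)
    "Agentic":         (75.0, 0.22),
    "Coding":          (74.9, 0.20),
    "Reasoning":       (82.8, 0.17),
    "Knowledge":       (83.6, 0.12),
    "Math":            (83.0, 0.05),
    "Multilingual":    (81.7, 0.07),
    "Multimodal":      (86.0, 0.12),
    "Inst. Following": (79.1, 0.05),
}

total_weight = sum(w for _, w in categories.values())
assert abs(total_weight - 1.0) < 1e-9  # the listed weights sum to 100%

weighted_mean = sum(s * w for s, w in categories.values()) / total_weight
print(round(weighted_mean, 1))  # → 79.7
```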

Chatbot Arena Performance

Text Overall: 1486 (CI ±3.9, 41,404 votes)
Coding: 1519 (CI ±7.2, 8,589 votes)
Math: 1478 (CI ±11.5, 2,664 votes)
Instruction Following: 1474 (CI ±6.5, 11,210 votes)
Creative Writing: 1485 (CI ±8.4, 6,311 votes)
Multi-turn: 1495 (CI ±8.0, 6,772 votes)
Hard Prompts: 1504 (CI ±5.1, 22,495 votes)
Hard Prompts (English): 1503 (CI ±6.6, 10,731 votes)
Longer Query: 1492 (CI ±6.7, 10,531 votes)
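Arena Elo ratings are easiest to interpret as expected head-to-head win rates. The sketch below uses the standard Elo expectation formula; the 1450-rated opponent is a hypothetical example, not a model from this page.

```python
def expected_win_rate(r_a: float, r_b: float) -> float:
    """Standard Elo expectation: probability that model A beats model B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# Example: Gemini 3 Pro's text rating (1486) against a hypothetical
# 1450-rated model. A 36-point gap is only a ~55% expected win rate,
# which is why the vote-based confidence intervals above matter.
p = expected_win_rate(1486, 1450)
print(f"{p:.3f}")  # → 0.552
```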

Benchmark Details

Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.

Gemini 3 Pro Family

Base entry

Frequently Asked Questions

How does Gemini 3 Pro perform overall in AI benchmarks?

Gemini 3 Pro currently ranks #14 out of 110 models on BenchLM's provisional leaderboard with an overall score of 83 (estimated). It is developed by Google and features a 2M-token context window.

Is Gemini 3 Pro good for reasoning and logic?

Gemini 3 Pro ranks #12 out of 110 models in reasoning and logic benchmarks with an average score of 82.8. There are stronger options in this category.

Is Gemini 3 Pro good for multimodal and grounded tasks?

Gemini 3 Pro ranks #17 out of 110 models in multimodal and grounded tasks benchmarks with an average score of 86. There are stronger options in this category.

Which sibling models are related to Gemini 3 Pro?

Gemini 3 Pro belongs to the Gemini 3 Pro family. Related variants on BenchLM include Gemini 3 Pro Deep Think.

Does Gemini 3 Pro have full benchmark coverage on BenchLM?

Not yet. Gemini 3 Pro currently has 7 published benchmark scores out of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.

What is the context window size of Gemini 3 Pro?

Gemini 3 Pro has a context window of 2M tokens, which determines how much text it can process in a single interaction.
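To give the 2M-token figure a rough physical scale, the sketch below applies common rule-of-thumb conversions (roughly 4 characters or 0.75 words per English token). These ratios are heuristics, not the model's actual tokenizer, and real counts vary with language and content.

```python
# Back-of-envelope sketch of what a 2M-token context window holds,
# using common chars-per-token and words-per-token heuristics.
CONTEXT_TOKENS = 2_000_000
CHARS_PER_TOKEN = 4      # rough heuristic, not the actual tokenizer
WORDS_PER_TOKEN = 0.75   # another common rule of thumb

approx_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN        # ~8,000,000 characters
approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)   # ~1,500,000 words
print(approx_chars, approx_words)
```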

Last updated: April 20, 2026 · Runtime metrics stay blank until BenchLM has a sourced snapshot.
