Side-by-side benchmark comparison across agentic, coding, multimodal, reasoning, knowledge, instruction-following, multilingual, and math workloads.
Gemini 3 Pro Deep Think · Trinity-Large-Thinking — no category winner yet (0/8 categories scored for either model).
Benchmark data for Gemini 3 Pro Deep Think and Trinity-Large-Thinking is coming soon on BenchLM.
BenchLM has partial data for these models, but not enough overlapping benchmark coverage to produce a fair score-level comparison yet.
Gemini 3 Pro Deep Think has the larger context window at 2M, compared with 512K for Trinity-Large-Thinking.
BenchLM keeps the benchmark table and the operator tradeoffs on the same page so a better score does not hide a materially slower, pricier, or smaller-context model.
Runtime metrics show N/A when BenchLM does not have a sourced snapshot for that exact model. The scoring rules and freshness policy are documented on the methodology page.
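To make that gating concrete, here is a minimal sketch of how a head-to-head score can be withheld when two models share too few sourced benchmarks. The function name, data shapes, and the overlap threshold are illustrative assumptions, not BenchLM's actual implementation.

```python
from typing import Optional

# Hypothetical illustration of BenchLM-style gating: a head-to-head score is
# only produced when both models have sourced results on enough shared
# benchmarks. Names and the threshold below are assumptions.

def comparable_score(
    scores_a: dict[str, float],
    scores_b: dict[str, float],
    min_overlap: int = 5,  # assumed minimum shared benchmarks for a fair call
) -> Optional[tuple[float, float]]:
    """Return mean scores over shared benchmarks, or None if coverage is thin."""
    shared = sorted(scores_a.keys() & scores_b.keys())
    if len(shared) < min_overlap:
        return None  # render "coming soon" / N/A instead of a misleading number
    mean_a = sum(scores_a[b] for b in shared) / len(shared)
    mean_b = sum(scores_b[b] for b in shared) / len(shared)
    return mean_a, mean_b

# Example: the one benchmark both models currently report a variant of.
gemini = {"SWE-bench Verified": 58.0}
trinity = {"SWE-bench Verified": 63.2}
print(comparable_score(gemini, trinity))  # None -> too little overlap to score
```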
| Benchmark | Gemini 3 Pro Deep Think | Trinity-Large-Thinking |
|---|---|---|
| **Agentic** | | |
| Terminal-Bench 2.0 | 77% | — |
| BrowseComp | 87% | — |
| OSWorld-Verified | 73% | — |
| Tau2-Airline | — | 88.0% |
| Tau2-Telecom | — | 94.7% |
| PinchBench | — | 91.9% |
| BFCL v4 | — | 70.1% |
| **Coding** | | |
| HumanEval | 91% | — |
| SWE-bench Verified | 58% | — |
| LiveCodeBench | 58% | — |
| SWE-bench Pro | 63% | — |
| SWE-bench Verified* | — | 63.2% |
| **Multimodal & Grounded** | | |
| MMMU-Pro | 95% | — |
| OfficeQA Pro | 95% | — |
| **Reasoning** | | |
| MuSR | 93% | — |
| BBH | 95% | — |
| LongBench v2 | 94% | — |
| MRCRv2 | 96% | — |
| ARC-AGI-2 | 45.1% | — |
| **Knowledge** | | |
| MMLU | 99% | — |
| GPQA | 97% | — |
| SuperGPQA | 95% | — |
| MMLU-Pro | 81% | — |
| HLE | 32% | — |
| FrontierScience | 88% | — |
| SimpleQA | 95% | — |
| GPQA-D | — | 76.3% |
| MMLU-Pro (Arcee) | — | 83.4% |
| **Instruction Following** | | |
| IFEval | 89% | — |
| IFBench | — | 52.3% |
| **Multilingual** | | |
| MGSM | 92% | — |
| MMLU-ProX | 85% | — |
| **Mathematics** | | |
| AIME 2023 | 99% | — |
| AIME 2024 | 99% | — |
| AIME 2025 | 98% | — |
| HMMT Feb 2023 | 95% | — |
| HMMT Feb 2024 | 97% | — |
| HMMT Feb 2025 | 96% | — |
| BRUMO 2025 | 96% | — |
| MATH-500 | 92% | — |
| AIME25 (Arcee) | — | 96.3% |
This comparison is not fully populated yet. BenchLM is tracking both models, but the sourced, head-to-head benchmark breakdown is still coming soon.
BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.
Pricing: Gemini 3 Pro Deep Think has no published pricing yet; Trinity-Large-Thinking is listed at $0.25 input / $0.90 output per 1M tokens. Both model pages still include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.
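For reference, a small worked cost calculation using the Trinity-Large-Thinking rates listed above; the request sizes are invented for illustration.

```python
# Cost sketch based on the listed Trinity-Large-Thinking rates.
INPUT_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PER_M = 0.90  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request at the per-1M-token rates above."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# e.g. a 20K-token prompt with a 4K-token response:
print(f"${request_cost(20_000, 4_000):.6f}")  # $0.008600
```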