Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, and math workflows.
GPT-5.4 mini — overall score 58 · winner in 0/8 categories
Holo2-4B — overall score coming soon · winner in 0/8 categories
Full benchmark data for this GPT-5.4 mini vs. Holo2-4B comparison is coming soon on BenchLM. BenchLM does not yet have sourced benchmark coverage for Holo2-4B, so the comparison is currently limited to metadata such as context window, reasoning mode, and pricing where available.
GPT-5.4 mini has the larger context window at 400K tokens, compared with 262K for Holo2-4B.
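To make that difference concrete, here is a minimal sketch that checks whether a document would fit in each window. The ~4-characters-per-token ratio and the sample document are illustrative assumptions, not BenchLM tooling or either model's actual tokenizer.

```python
# Rough fit check against the listed context windows.
# The 4-chars-per-token ratio is a crude heuristic (assumption),
# not either model's real tokenizer.

WINDOWS = {"GPT-5.4 mini": 400_000, "Holo2-4B": 262_000}
CHARS_PER_TOKEN = 4  # heuristic estimate

def fits(text: str, window_tokens: int) -> bool:
    """True if the estimated token count fits within the window."""
    return len(text) / CHARS_PER_TOKEN <= window_tokens

doc = "x" * 1_200_000  # ~300K estimated tokens
for model, window in WINDOWS.items():
    print(f"{model}: {fits(doc, window)}")
# GPT-5.4 mini: True
# Holo2-4B: False
```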
| Benchmark | GPT-5.4 mini | Holo2-4B |
|---|---|---|
| **Agentic** | | |
| Terminal-Bench 2.0 | 60% | — |
| OSWorld-Verified | 72.1% | — |
| MCP Atlas | 57.7% | — |
| Toolathlon | 42.9% | — |
| tau2-bench | 93.4% | — |
| **Coding** | | |
| SWE-bench Pro | 54.4% | — |
| **Multimodal & Grounded** | | |
| MMMU-Pro | 76.6% | — |
| MMMU-Pro w/ Python | 78% | — |
| OmniDocBench 1.5 (edit distance; lower is better) | 0.1263 | — |
| **Reasoning** | | |
| MRCR v2 | 40.7% | — |
| MRCR v2 64K-128K | 47.7% | — |
| MRCR v2 128K-256K | 33.6% | — |
| Graphwalks BFS 128K | 76.3% | — |
| Graphwalks Parents 128K | 71.5% | — |
| **Knowledge** | | |
| GPQA | 88% | — |
| HLE | 41.5% | — |
| HLE w/o tools | 28.2% | — |
| **Instruction Following** | Coming soon | Coming soon |
| **Multilingual** | Coming soon | Coming soon |
| **Mathematics** | Coming soon | Coming soon |
The full comparison is not available yet: BenchLM is tracking both models, but the sourced benchmark breakdown for this matchup is still coming soon.
BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.
GPT-5.4 mini: $0.75 input / $4.50 output per 1M tokens

Both model pages still include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.
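To put those rates in perspective, here is a minimal per-request cost sketch using the listed GPT-5.4 mini prices; the token counts are hypothetical examples, and Holo2-4B is omitted because its pricing is not listed on this page.

```python
# Per-request cost at the listed GPT-5.4 mini rates.
# Token counts in the example are hypothetical.

INPUT_RATE = 0.75 / 1_000_000   # $ per input token ($0.75 per 1M)
OUTPUT_RATE = 4.50 / 1_000_000  # $ per output token ($4.50 per 1M)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a request that fills the 400K window and returns 2K tokens.
print(f"${request_cost(400_000, 2_000):.4f}")  # $0.3090
```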