Terminal-Bench 2.0

A benchmark for agentic software engineering tasks executed in real terminal environments. DeepSeek reports it in its agentic section, while BenchLM also mirrors it in coding for models that publish it as a developer-task signal.

Benchmark score on Terminal-Bench 2.0 — April 29, 2026

BenchLM mirrors the published score view for Terminal-Bench 2.0. GPT-5.5 leads the public snapshot at 82.0%, followed by Claude Opus 4.7 (Adaptive) at 69.4% and MiMo-V2.5-Pro at 68.4%. BenchLM does not use these results to rank models overall.

17 models · Coding · Current · Display only · Updated April 29, 2026

The published Terminal-Bench 2.0 snapshot pairs a clear leader with a tight chasing pack: GPT-5.5 sits at 82.0%, the third row trails by 13.6 points, and the broader top-10 spread is 22.7 points, so the benchmark still separates strong models even where the runners-up cluster.
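
Both spread figures can be reproduced directly from the score column in the table below; a few lines of Python using the values published on this page:

# Reproduce the spread figures quoted above from the published scores.
scores = [82.0, 69.4, 68.4, 67.9, 66.7, 65.8, 65.4, 63.3, 61.7, 59.3,
          59.1, 56.9, 56.6, 51.5, 49.1, 40.7, 30.1]

leader_to_third = scores[0] - scores[2]  # 82.0 - 68.4 = 13.6 points
top10_spread = scores[0] - scores[9]     # 82.0 - 59.3 = 22.7 points

print(f"Leader-to-third gap: {leader_to_third:.1f} points")
print(f"Top-10 spread: {top10_spread:.1f} points")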

17 models have been evaluated on Terminal-Bench 2.0. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. Terminal-Bench 2.0 itself is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
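
BenchLM does not publish its exact formula on this page, but the mechanics described here can be sketched in a few lines. Everything below except the 20% coding weight is a hypothetical illustration (names, scores, and structure are assumptions), not BenchLM's actual implementation:

# Minimal sketch of category-weighted scoring that skips display-only
# benchmarks. Only the 20% coding weight comes from the page; benchmark
# names, scores, and helper structure are hypothetical.
CODING_WEIGHT = 0.20

benchmarks = [
    # (name, category, score, display_only)
    ("Terminal-Bench 2.0", "coding", 82.0, True),        # display only: skipped
    ("HypotheticalCodingBench", "coding", 74.0, False),  # counts toward coding
]

def category_score(category):
    scored = [score for (_, cat, score, display_only) in benchmarks
              if cat == category and not display_only]
    return sum(scored) / len(scored) if scored else 0.0

# Coding contributes its weighted average to the overall score;
# a display-only benchmark is excluded, so it cannot move the ranking.
print(CODING_WEIGHT * category_score("coding"))  # 0.2 * 74.0 = 14.8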

About Terminal-Bench 2.0

Year: 2026
Tasks: Terminal-based software tasks
Format: Interactive CLI agent evaluation
Difficulty: Professional software engineering

Terminal-Bench 2.0 focuses on realistic CLI and repository workflows rather than toy code generation. BenchLM keeps mirrored coding-category benchmarks such as this one display-only unless the scoring weights explicitly include them.

BenchLM freshness & provenance

Version: Terminal-Bench 2
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
Status: Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
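
As an illustration of how such a policy could work (the thresholds, field names, and age-to-tier mapping below are assumptions, not BenchLM's published rules; only the three treatment tiers come from the paragraph above):

from datetime import date

# Hypothetical freshness classifier: a benchmark refreshed within one
# cadence window stays a strong differentiator, one overdue window makes
# it a benchmark to watch, anything older becomes display-only.
def treatment(last_refresh, cadence_days, today=date(2026, 4, 29)):
    age_days = (today - last_refresh).days
    if age_days <= cadence_days:
        return "strong differentiator"
    if age_days <= 2 * cadence_days:
        return "benchmark to watch"
    return "display-only reference"

# Quarterly cadence (~90 days), refreshed at the April 29, 2026 snapshot:
print(treatment(date(2026, 4, 29), 90))  # -> "strong differentiator"

Note that freshness is not the only input in practice: Terminal-Bench 2.0 is marked Current on this page yet remains display-only, so category policy can override freshness.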

Benchmark score table (17 models)

Rank  Score   Model (where named on this page)
1     82.0%   GPT-5.5
2     69.4%   Claude Opus 4.7 (Adaptive)
3     68.4%   MiMo-V2.5-Pro
4     67.9%
5     66.7%
6     65.8%
7     65.4%
8     63.3%
9     61.7%
10    59.3%
11    59.1%
12    56.9%
13    56.6%
14    51.5%
15    49.1%
16    40.7%
17    30.1%

FAQ

What does Terminal-Bench 2.0 measure?

Terminal-Bench 2.0 measures agentic software engineering tasks executed in real terminal environments. DeepSeek reports it in its agentic section, while BenchLM also mirrors it in coding for models that publish it as a developer-task signal.

Which model scores highest on Terminal-Bench 2.0?

GPT-5.5 by OpenAI currently leads with a score of 82.0% on Terminal-Bench 2.0.

How many models are evaluated on Terminal-Bench 2.0?

17 AI models have been evaluated on Terminal-Bench 2.0 on BenchLM.

Last updated: April 29, 2026 · BenchLM version Terminal-Bench 2
