A suite of 23 challenging tasks from the BIG-Bench collaborative benchmark where prior language models failed to exceed average human performance, even with chain-of-thought prompting.
BenchLM is tracking BBH in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.
These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.
BenchLM mirrors the published tracked-score view for BBH. GPT-5.3 Codex leads the public snapshot at 98%, followed by GPT-5.2 Pro (98%) and GPT-5.4 (97%). BenchLM does not use these results to rank models overall.
Model           Organization   Model ID         Tracked score
GPT-5.3 Codex   OpenAI         gpt-5-3-codex    98%
GPT-5.2 Pro     OpenAI         gpt-5-2-pro      98%
GPT-5.4         OpenAI         gpt-5-4          97%
The published BBH snapshot is tightly clustered at the top: GPT-5.3 Codex sits at 98%, and the third-ranked model is only 1.0 point behind. The broader top-10 spread is 4.0 points, so the top published scores sit in a relatively narrow band.
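As a minimal sketch of where those spread figures come from, assuming a hypothetical set of top-10 tracked scores consistent with the numbers above (the real snapshot has 116 rows), the values reduce to simple max-minus-min arithmetic:

```python
# Hypothetical top-10 tracked scores; only the endpoints match the text above.
scores = [98.0, 98.0, 97.0, 96.5, 96.0, 95.5, 95.0, 94.5, 94.2, 94.0]

top10 = sorted(scores, reverse=True)[:10]
spread = top10[0] - top10[-1]       # 98.0 - 94.0 = 4.0 points
gap_to_third = top10[0] - top10[2]  # 98.0 - 97.0 = 1.0 point

print(f"Top-10 spread: {spread:.1f} points; leader-to-third gap: {gap_to_third:.1f} points")
```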
116 models have been evaluated on BBH. The benchmark falls under the Reasoning category, which carries a 17% weight in BenchLM.ai's overall scoring system. BBH itself is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
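A minimal sketch of how a display-only benchmark can sit inside a weighted category without affecting the overall score is shown below. The field names (score, display_only) and the averaging rule are assumptions for illustration, not BenchLM's actual schema or formula:

```python
# Hypothetical category-weighted scoring with a display-only exclusion flag.
benchmarks = [
    {"name": "BBH", "category": "Reasoning", "score": 98.0, "display_only": True},
    {"name": "OtherReasoningBench", "category": "Reasoning", "score": 91.0, "display_only": False},
]
category_weights = {"Reasoning": 0.17}  # the 17% weight mentioned above

def overall_contribution(rows, weights):
    """Average the scorable (non-display-only) rows per category, then apply the weight."""
    total = 0.0
    for category, weight in weights.items():
        scorable = [r["score"] for r in rows
                    if r["category"] == category and not r["display_only"]]
        if scorable:
            total += weight * (sum(scorable) / len(scorable))
    return total

# BBH is skipped entirely; only the scorable row contributes.
print(overall_contribution(benchmarks, category_weights))  # 0.17 * 91.0 = 15.47
```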
Year: 2022
Tasks: 23
Format: Mixed reasoning tasks
Difficulty: Advanced reasoning
BBH focuses on 23 tasks from BIG-Bench that remain challenging for language models. Tasks include logical deduction, tracking shuffled objects, causal judgement, and other complex reasoning scenarios.
Version: BBH 2022
Refresh cadence: Static
Staleness state: Stale
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
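As an illustration of that policy, a freshness check might map the metadata fields shown above to one of the three tiers the text names. The thresholds and function below are assumptions, not BenchLM's actual rules; see the methodology page for the real policy:

```python
# Hypothetical mapping from freshness metadata to a display tier.
# Tier names come from the text; the decision logic is an assumption.
def freshness_tier(refresh_cadence: str, staleness_state: str) -> str:
    if staleness_state == "Stale":
        return "display-only reference"
    if refresh_cadence == "Static":
        return "benchmark to watch"
    return "strong differentiator"

# BBH's metadata (Refresh cadence: Static, Staleness state: Stale)
# lands it in the display-only tier, matching how it is shown on this page.
print(freshness_tier("Static", "Stale"))  # -> "display-only reference"
```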
GPT-5.3 Codex currently leads the published BBH snapshot with a tracked score of 98%. BenchLM shows this benchmark for display only and does not use it in overall rankings.
116 AI models are included in BenchLM's mirrored BBH snapshot, based on the public leaderboard captured on April 21, 2026.