Head-to-head comparison across 0 benchmark categories. Overall scores shown here use BenchLM's provisional ranking lane.
GPT-5.1-Codex: 18
Trinity-Large-Thinking: 12
Benchmark data for GPT-5.1-Codex and Trinity-Large-Thinking is coming soon on BenchLM.
Spec                                     GPT-5.1-Codex   Trinity-Large-Thinking
Pricing (input / output per 1M tokens)   N/A             $0.25 / $0.90
Context window                           400K            512K
BenchLM has partial data for these models, but not enough overlapping benchmark coverage to produce a fair score-level comparison yet.
Trinity-Large-Thinking has the larger context window at 512K, compared with 400K for GPT-5.1-Codex.
Not fully yet: BenchLM is tracking both models, but the sourced benchmark breakdown for this comparison is still coming soon.
BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.
Trinity-Large-Thinking is priced at $0.25 input / $0.90 output per 1M tokens; GPT-5.1-Codex pricing is not yet listed. Both model pages still include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.
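To put the per-1M-token rates in concrete terms, here is a minimal cost sketch in Python, assuming simple linear pricing with no caching or batch discounts; the `estimate_cost_usd` function name and the example token counts are illustrative, not part of BenchLM's data.

```python
# A minimal cost sketch, assuming simple linear per-token pricing with no
# caching or batch discounts. Rates are Trinity-Large-Thinking's listed
# prices; GPT-5.1-Codex rates are still N/A, so they are omitted here.

TRINITY_INPUT_USD_PER_1M = 0.25   # $ per 1M input tokens
TRINITY_OUTPUT_USD_PER_1M = 0.90  # $ per 1M output tokens

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate a single request's cost in USD from its token counts."""
    return (
        input_tokens / 1_000_000 * TRINITY_INPUT_USD_PER_1M
        + output_tokens / 1_000_000 * TRINITY_OUTPUT_USD_PER_1M
    )

# Example: a 50K-token prompt that produces a 2K-token response.
print(f"${estimate_cost_usd(50_000, 2_000):.4f}")  # -> $0.0143
```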