A repository-level Lean 4 proof engineering benchmark that measures whether a model can complete formal proofs and correctly define new mathematical concepts inside realistic FLT project pull requests.
BenchLM is tracking FLTEval in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.
These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.
BenchLM mirrors the published tracked score view for FLTEval. Claude Opus 4.6 leads the public snapshot at 39.6%, followed by Claude Sonnet 4.6 (23.7%) and Claude Haiku 4.5 (23.0%). BenchLM does not use these results to rank models overall.
Claude Opus 4.6 (Anthropic, claude-opus-4-6)
Claude Sonnet 4.6 (Anthropic, claude-sonnet-4-6)
Claude Haiku 4.5 (Anthropic, claude-haiku-4-5)
The published FLTEval snapshot has a clear front-runner: Claude Opus 4.6 sits at 39.6%, while the second and third rows are tightly clustered 15.9 and 16.6 points behind. The spread across all tracked rows is 17.7 points, so the benchmark still separates strong models from the rest of the field.
Four models have been evaluated on FLTEval. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. FLTEval is currently displayed for reference only and excluded from the scoring formula, so it does not directly affect overall rankings.
Year: 2026
Tasks: FLT project pull requests
Format: Lean 4 repository task completion
Difficulty: Formal verification / proof engineering
FLTEval is designed to move evaluation beyond isolated competition-math problems. Instead of proving one-off statements, models must operate inside realistic formal repositories and finish pull-request-style Lean 4 work with Lean itself acting as a verifier.
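To make the task format concrete, here is a minimal, hypothetical sketch of what a pull-request-style item could look like: the repository asks for a new definition plus a proof whose body is initially `sorry`, and a completion counts only if Lean's kernel accepts it. The names `IsSumOfTwoSquares` and `five_is_sum_of_two_squares` are illustrative and are not taken from the actual FLT project or FLTEval tasks.

```lean
-- A minimal, hypothetical sketch of an FLTEval-style task.
-- All names are illustrative; they are not from the real FLT repository.

/-- New concept the pull request asks for: `n` is a sum of two squares. -/
def IsSumOfTwoSquares (n : Nat) : Prop :=
  ∃ a b : Nat, n = a * a + b * b

/-- Proof obligation shipped with the task.
    In the unfinished pull request the body would be `sorry`;
    the model must replace it with a proof Lean's kernel accepts. -/
theorem five_is_sum_of_two_squares : IsSumOfTwoSquares 5 := by
  show ∃ a b : Nat, 5 = a * a + b * b
  exact ⟨1, 2, rfl⟩  -- witnesses a = 1, b = 2; the kernel checks 5 = 1*1 + 2*2
```

In a setup like this, checking a submission reduces to rebuilding the patched repository (for example with `lake build`) and confirming that no `sorry` or error remains, which is what makes Lean itself a reliable automatic verifier for this kind of benchmark.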
Version: FLTEval 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
Claude Opus 4.6 currently leads the published FLTEval snapshot with a tracked score of 39.6%. BenchLM displays this benchmark for reference only and does not use it in overall rankings.
Four AI models are included in BenchLM's mirrored FLTEval snapshot, based on the public leaderboard captured on May 1, 2026.