A repository-level Lean 4 proof engineering benchmark that measures whether a model can complete formal proofs and correctly define new mathematical concepts inside realistic pull requests to the FLT (Fermat's Last Theorem) formalization project.
As of March 2026, Claude Opus 4.6 leads the FLTEval leaderboard with 39.6%, followed by Claude Sonnet 4.6 (23.7%) and Claude Haiku 4.5 (23.0%).
Model               Developer   Score
Claude Opus 4.6     Anthropic   39.6%
Claude Sonnet 4.6   Anthropic   23.7%
Claude Haiku 4.5    Anthropic   23.0%
The scores show a moderate spread, with a meaningful gap between the top model and the mid-tier models.
Four models have been evaluated on FLTEval. The benchmark falls in the Coding category, which carries a 20% weight in BenchLM.ai's overall scoring system. FLTEval is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
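To make the weighting concrete, here is a minimal sketch of how category-weighted scoring with reference-only benchmarks could work. Everything here is an illustrative assumption (the `CATEGORY_WEIGHTS` values other than Coding, the `scored` flag, and all names are invented); BenchLM.ai has not published its exact formula on this page.

```python
# Minimal sketch of category-weighted scoring with reference-only benchmarks.
# All names, and every weight except the stated 20% for Coding, are
# illustrative assumptions -- not BenchLM.ai's published implementation.

from dataclasses import dataclass

# Hypothetical category weights; Coding carries 20% per the page.
CATEGORY_WEIGHTS = {"coding": 0.20, "reasoning": 0.40, "knowledge": 0.40}

@dataclass
class BenchmarkResult:
    name: str
    category: str
    score: float   # in [0.0, 1.0]
    scored: bool   # False => displayed for reference, excluded from the formula

def overall_score(results: list[BenchmarkResult]) -> float:
    """Average the scored benchmarks within each category, then weight categories."""
    total = 0.0
    for category, weight in CATEGORY_WEIGHTS.items():
        scores = [r.score for r in results
                  if r.category == category and r.scored]
        if scores:
            total += weight * (sum(scores) / len(scores))
    return total

# Because FLTEval is marked scored=False, changing its score leaves the
# overall number untouched -- which is what "excluded from the scoring
# formula" means in practice.
results = [
    BenchmarkResult("FLTEval", "coding", 0.396, scored=False),
    BenchmarkResult("SomeCodingBench", "coding", 0.70, scored=True),
]
print(overall_score(results))  # only SomeCodingBench contributes
```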
Year:       2026
Tasks:      FLT project pull requests
Format:     Lean 4 repository task completion
Difficulty: Formal verification / proof engineering
FLTEval is designed to move evaluation beyond isolated competition-math problems. Instead of proving one-off statements, models must operate inside realistic formal repositories and finish pull-request-style Lean 4 work, with Lean itself acting as the verifier.
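As a hedged illustration of what such a task looks like (the lemma below is a toy stand-in, not drawn from the actual FLT repository or the FLTEval task set), a pull request ships a stub ending in `sorry`, and a submission counts only if the completed proof typechecks:

```lean
-- Hypothetical FLTEval-style task; the statement is an invented toy
-- example, not an actual FLT repository lemma.

-- As it might appear in the pull-request diff: a stub the model must
-- complete (a remaining `sorry` elaborates only with a warning).
theorem demo_add_comm (m n : Nat) : m + n = n + m := by
  sorry

-- One acceptable completion. If this elaborates, the Lean kernel
-- (run via `lake build`) accepts it; no human grading is involved.
theorem demo_add_comm' (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

Because the Lean kernel either accepts the term or rejects it, grading is binary and fully automatic, which is what makes repository-scale evaluation tractable.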