A function-calling benchmark for tool selection, schema adherence, and argument correctness.
BenchLM mirrors the published score view for BFCL v4. LFM2.5-VL-450M leads the public snapshot at 21.1%. BenchLM does not use these results to rank models overall.
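The three axes named above — tool selection, schema adherence, and argument correctness — can be sketched as independent checks on a single model tool call. This is a minimal illustration, not BFCL's actual grader; the tool names, schemas, and sample call below are hypothetical.

```python
# Hypothetical tool registry: each tool maps argument names to expected types.
TOOLS = {
    "get_weather": {
        "required": {"city": str},
        "optional": {"units": str},
    },
}

def evaluate_call(call, expected):
    """Return (selection_ok, schema_ok, args_ok) for one model tool call."""
    name, args = call["name"], call["arguments"]
    # 1. Tool selection: did the model pick the right tool?
    selection_ok = name == expected["name"]
    # 2. Schema adherence: required args present with correct types,
    #    and no invented parameters outside the schema.
    schema = TOOLS.get(name)
    schema_ok = (
        schema is not None
        and all(k in args and isinstance(args[k], t)
                for k, t in schema["required"].items())
        and all(k in schema["required"] or k in schema["optional"]
                for k in args)
    )
    # 3. Argument correctness: values match the gold answer exactly.
    args_ok = schema_ok and args == expected["arguments"]
    return selection_ok, schema_ok, args_ok

call = {"name": "get_weather", "arguments": {"city": "Tokyo"}}
gold = {"name": "get_weather", "arguments": {"city": "Tokyo"}}
print(evaluate_call(call, gold))  # (True, True, True)
```

A call that names the right tool but hallucinates a parameter would pass the first check and fail the second, which is why the axes are scored separately.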
Year: 2026
Tasks: Function-calling tasks
Format: Tool invocation and schema evaluation
Difficulty: Advanced tool use
BenchLM stores BFCL v4 as a display-only function-calling reference outside the current weighted core schema.
Version: BFCL v4 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
LFM2.5-VL-450M by LiquidAI currently leads with a score of 21.1% on BFCL v4.
1 AI model has been evaluated on BFCL v4 in BenchLM.