Berkeley Function Calling Leaderboard v4 (BFCL v4)

A function-calling benchmark for tool selection, schema adherence, and argument correctness.

Benchmark score on BFCL v4 — May 13, 2026

BenchLM mirrors the published score view for BFCL v4. ZAYA1-8B leads the public snapshot at 39.2%, followed by LFM2.5-VL-450M at 21.1%. BenchLM does not use these results to rank models overall.

2 models · Agentic · Current · Display only · Updated May 13, 2026

About BFCL v4

Year: 2026
Tasks: Function-calling tasks
Format: Tool invocation and schema evaluation
Difficulty: Advanced tool use

BenchLM stores BFCL v4 as a display-only function-calling reference outside the current weighted core schema.

BenchLM freshness & provenance

Version: BFCL v4 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set
Status: Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
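As an illustrative sketch only: the three treatment labels come from the sentence above, but the mapping rules below are assumptions for illustration, not BenchLM's published scoring policy (see the methodology page for the real rules).

```python
def benchmark_treatment(staleness: str, display_only: bool) -> str:
    """Map freshness metadata to how a benchmark is treated in rankings.

    The thresholds here are illustrative assumptions, not BenchLM's policy.
    """
    if display_only:
        return "display-only reference"   # e.g. BFCL v4 on this page
    if staleness == "current":
        return "strong differentiator"
    return "benchmark to watch"

print(benchmark_treatment("current", display_only=True))  # display-only reference
```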

Benchmark score table (2 models)

1. ZAYA1-8B (Zyphra): 39.2%
2. LFM2.5-VL-450M: 21.1%

FAQ

What does BFCL v4 measure?

A function-calling benchmark for tool selection, schema adherence, and argument correctness.
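To make the three evaluation axes concrete, here is a minimal sketch of the kind of check a function-calling benchmark performs: a tool schema, a model-produced call, and tests for tool selection, required arguments, and argument types. The `get_weather` schema and the `check_call` helper are hypothetical illustrations, not BFCL v4's actual evaluation harness.

```python
# Hypothetical tool schema (not from BFCL v4's dataset).
TOOL_SCHEMA = {
    "name": "get_weather",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

def check_call(call: dict, schema: dict = TOOL_SCHEMA) -> list[str]:
    """Return a list of problems with a model-produced function call."""
    errors = []
    # Tool selection: did the model pick the right function?
    if call.get("name") != schema["name"]:
        errors.append(f"wrong tool: {call.get('name')!r}")
        return errors
    props = schema["parameters"]["properties"]
    args = call.get("arguments", {})
    # Schema adherence: required arguments present, no unknown arguments.
    for req in schema["parameters"]["required"]:
        if req not in args:
            errors.append(f"missing required argument {req!r}")
    for key, value in args.items():
        if key not in props:
            errors.append(f"unexpected argument {key!r}")
            continue
        # Argument correctness: basic type and enum checks.
        if props[key]["type"] == "string" and not isinstance(value, str):
            errors.append(f"{key!r} should be a string")
        if "enum" in props[key] and value not in props[key]["enum"]:
            errors.append(f"{key!r} not in {props[key]['enum']}")
    return errors

# A well-formed call passes; a malformed one is flagged.
print(check_call({"name": "get_weather", "arguments": {"city": "Berkeley"}}))  # []
print(check_call({"name": "get_weather", "arguments": {"unit": "kelvin"}}))    # errors
```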

Which model scores highest on BFCL v4?

ZAYA1-8B by Zyphra currently leads with a score of 39.2% on BFCL v4.

How many models are evaluated on BFCL v4?

2 AI models have been evaluated on BFCL v4 on BenchLM.

Compare Top Models on BFCL v4

Last updated: May 13, 2026 · BenchLM version BFCL v4 2026
