LFM2.5-VL-450M

LiquidAI · Current · Released Apr 8, 2026
Overall Score
Unranked
Arena Elo
N/A
Categories Ranked
1 of 8
Price (1M tokens)
$0 in / $0 out
Speed
N/A
Context
128K
Open Weight · Non-Reasoning
Confidence
vl

BenchLM is tracking LFM2.5-VL-450M, but this profile is currently excluded from the public leaderboard because it still lacks enough non-generated benchmark coverage to rank safely. Only non-generated public benchmark rows appear below.

LFM2.5-VL-450M is an open-weight model with a 128K-token context window. It processes queries without explicit chain-of-thought reasoning, offering faster response times and lower token usage.

This profile currently has 7 of 151 tracked benchmarks. BenchLM only exposes non-generated benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.

Its strongest category is Instruction Following (#112), currently its only ranked category. With coverage this sparse, the overall performance profile is not yet well characterized.

Ranking Distribution

Category rank across 1 benchmark category — sorted by best rank

Category Performance

Scores across all benchmark categories (0-100 scale)

Category Breakdown

Agentic

0.0 / 100
Weight: 22% · 1 benchmark
Terminal-Bench 2.0 · BrowseComp · OSWorld-Verified · GAIA · TAU-bench · WebArena

Coding

0.0 / 100
Weight: 20% · 0 benchmarks
SWE-bench Verified · LiveCodeBench · SWE-bench Pro · SWE-Rebench · SciCode

Reasoning

0.0 / 100
Weight: 17% · 0 benchmarks
MuSR · LongBench v2 · MRCRv2 · ARC-AGI-2

Knowledge

21.6 / 100
Weight: 12% · 2 benchmarks
GPQA · SuperGPQA · MMLU-Pro · HLE · FrontierScience · SimpleQA

Math

0.0 / 100
Weight: 5% · 0 benchmarks
AIME 2025 · BRUMO 2025 · MATH-500 · FrontierMath

Multilingual

0.0 / 100
Weight: 7% · 0 benchmarks
MGSM · MMLU-ProX

Multimodal

0.0 / 100
Weight: 12% · 3 benchmarks
MMMU-Pro · OfficeQA Pro

Inst. Following

#112
61.2 / 100
Weight: 5% · 1 benchmark
IFEval · IFBench
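The category weights above sum to 100%. As an illustrative sketch only (BenchLM's actual aggregation method is not documented on this page), a simple weighted average of the category scores listed above would look like:

```python
# Hypothetical weighted-average aggregation of the category scores
# and weights shown above. Assumption: this is NOT BenchLM's
# documented formula, just an illustration of the arithmetic.
categories = {
    # name: (score out of 100, weight as a fraction)
    "Agentic":         (0.0, 0.22),
    "Coding":          (0.0, 0.20),
    "Reasoning":       (0.0, 0.17),
    "Knowledge":       (21.6, 0.12),
    "Math":            (0.0, 0.05),
    "Multilingual":    (0.0, 0.07),
    "Multimodal":      (0.0, 0.12),
    "Inst. Following": (61.2, 0.05),
}

def weighted_overall(cats):
    """Weighted average of category scores; weights sum to 1.0."""
    return sum(score * weight for score, weight in cats.values())

print(round(weighted_overall(categories), 2))  # prints 5.65
```

Under this hypothetical scheme, the zero rows in uncovered categories drag the aggregate down sharply, which is one reason leaderboards exclude models with sparse coverage rather than rank them on such a number.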

Benchmark Details

Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.

Frequently Asked Questions

How does LFM2.5-VL-450M perform overall in AI benchmarks?

LFM2.5-VL-450M has 7 published benchmark scores on BenchLM, but it does not yet have enough non-generated coverage to receive a global overall rank.

Is LFM2.5-VL-450M good for knowledge and understanding?

LFM2.5-VL-450M has visible benchmark coverage in knowledge and understanding, but BenchLM does not currently assign it a global category rank there.

Is LFM2.5-VL-450M good for agentic tool use and computer tasks?

LFM2.5-VL-450M has visible benchmark coverage in agentic tool use and computer tasks, but BenchLM does not currently assign it a global category rank there.

Is LFM2.5-VL-450M good for multimodal and grounded tasks?

LFM2.5-VL-450M has visible benchmark coverage in multimodal and grounded tasks, but BenchLM does not currently assign it a global category rank there.

Is LFM2.5-VL-450M good for instruction following?

LFM2.5-VL-450M ranks #112 in instruction following benchmarks with an average score of 61.2. There are stronger options in this category.

Is LFM2.5-VL-450M open source?

LFM2.5-VL-450M is an open-weight model created by LiquidAI: its weights can be downloaded, run locally, and fine-tuned for specific use cases. Note that open weights are not the same as a fully open-source release, which would also cover training code and data.

Does LFM2.5-VL-450M have full benchmark coverage on BenchLM?

Not yet. LFM2.5-VL-450M currently has 7 published benchmark scores out of the 151 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.

What is the context window size of LFM2.5-VL-450M?

LFM2.5-VL-450M has a context window of 128K tokens, which determines how much text it can process in a single interaction.
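For a rough sense of what fits in that window, a common heuristic is ~4 characters per token for English text. This sketch uses that heuristic only as an assumption; an exact count would require the model's actual tokenizer.

```python
# Rough check of whether a prompt fits in a 128K-token context window.
# Assumption: ~4 characters per token is a crude English-text heuristic,
# not the model's real tokenizer.
CONTEXT_WINDOW_TOKENS = 128_000
CHARS_PER_TOKEN = 4  # rough average for English text

def fits_in_context(text: str) -> bool:
    """Estimate whether `text` fits in the context window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW_TOKENS

print(fits_in_context("hello " * 1000))  # ~1,500 estimated tokens → True
```

By this estimate, 128K tokens corresponds to roughly 500K characters of English text, on the order of a few hundred pages.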

Last updated: April 8, 2026 · Runtime metrics stay blank until BenchLM has a sourced snapshot.
