Benchmark analysis of LFM2.5-1.2B-Instruct by LiquidAI across 32 sourced tests on BenchLM.
According to BenchLM.ai, LFM2.5-1.2B-Instruct ranks #120 out of 123 models with an overall score of 30/100. It is far from the frontier on raw capability; its appeal lies in its small 1.2B-parameter footprint, which favors fast, low-cost inference.
LFM2.5-1.2B-Instruct is a proprietary model with a 32K token context window. It processes queries without explicit chain-of-thought reasoning, offering faster response times and lower token usage.
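Because it is a non-reasoning model, it can be driven like any ordinary chat model. The sketch below assumes the model is served behind an OpenAI-compatible endpoint (for example, a local vLLM server); the base URL and model ID are placeholders, not confirmed values.

```python
# Minimal sketch: querying a non-reasoning chat model through an
# OpenAI-compatible endpoint. The base URL and model ID below are
# placeholders, not confirmed values for this model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

resp = client.chat.completions.create(
    model="lfm2.5-1.2b-instruct",  # placeholder model ID
    messages=[{"role": "user", "content": "Summarize RFC 2119 in two sentences."}],
    max_tokens=256,
)

print(resp.choices[0].message.content)
# With no chain-of-thought phase, completion_tokens stays close to the
# visible answer length, which is where the latency and token-cost
# advantage of a non-reasoning model comes from.
print("completion tokens:", resp.usage.completion_tokens)
```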
LFM2.5-1.2B-Instruct sits inside the LFM2.5 1.2B family alongside LFM2.5-1.2B-Thinking.
Its strongest category is Instruction Following (#75), while its weakest is Coding (#120). This uneven profile makes it better suited to instruction-following workloads than to code generation or agentic tasks.
Creator: LiquidAI
Source Type: Proprietary
Reasoning: Non-Reasoning
Context Window: 32K
Overall Score: 30/100
Arena Elo: 1033
LFM2.5-1.2B-Instruct ranks #120 out of 123 models with an overall score of 30. It is created by LiquidAI and features a 32K context window.
Category rankings for LFM2.5-1.2B-Instruct across 123 models on BenchLM:

Knowledge & Understanding: #119, average score 26
Coding & Programming: #120, average score 7.2
Mathematics: #113, average score 37
Reasoning & Logic: #119, average score 32.1
Agentic Tool Use & Computer Tasks: #120, average score 25.7
Multimodal & Grounded Tasks: #118, average score 32.4
Instruction Following: #75, average score 80
Multilingual Tasks: #99, average score 60.7

Stronger options exist in every category.
LFM2.5-1.2B-Instruct belongs to the LFM2.5 1.2B family. Related variants on BenchLM include LFM2.5-1.2B-Thinking.
LFM2.5-1.2B-Instruct has a context window of 32K tokens, which caps how much text (prompt plus generated output) it can handle in a single interaction.
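To make that limit concrete, here is a small sketch of budgeting a prompt against the window with a Hugging Face tokenizer. The tokenizer repo ID is hypothetical, and "32K" is assumed to mean 32,768 tokens; confirm both on the model card.

```python
# Sketch: budget a prompt against the context window before sending it.
# Assumptions: the tokenizer repo ID is hypothetical, and "32K" is taken
# to mean 32,768 tokens -- confirm both on the model card.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 32_768   # assumed value for "32K"
RESERVED_OUTPUT = 512     # tokens kept free for the model's reply

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")  # hypothetical repo ID

def fit_prompt(text: str) -> str:
    """Truncate the prompt so prompt + reply fits within the context window."""
    ids = tokenizer.encode(text)
    budget = CONTEXT_WINDOW - RESERVED_OUTPUT
    if len(ids) <= budget:
        return text
    # Naive head truncation; real pipelines might chunk or summarize instead.
    return tokenizer.decode(ids[:budget], skip_special_tokens=True)
```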