LFM2.5-1.2B-Thinking Benchmark Scores & Performance

Benchmark analysis of LFM2.5-1.2B-Thinking by LiquidAI across 32 sourced tests on BenchLM.

According to BenchLM.ai, LFM2.5-1.2B-Thinking ranks #115 out of 123 models with an overall score of 33/100. It is not a frontier model, but its per-category scores below show where it is relatively stronger.

LFM2.5-1.2B-Thinking is a proprietary model with a 32K token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.
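Reasoning models commonly emit their chain of thought before the final answer, which is where the extra latency and token usage comes from. A minimal sketch of that trade-off, assuming the reasoning is wrapped in `<think>…</think>` delimiters (a common convention for thinking models, not a documented LFM2.5 output format):

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split a model response into (reasoning, answer), assuming the
    reasoning is wrapped in <think>...</think> tags (a common convention
    for reasoning models; not confirmed for LFM2.5-1.2B-Thinking)."""
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if match is None:
        return "", output.strip()
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()
    return reasoning, answer

response = "<think>2 apples plus 3 apples is 5 apples.</think>The answer is 5."
reasoning, answer = split_reasoning(response)

# Whitespace-token counts give a crude sense of how much extra output a
# reasoning model generates before the answer the user actually sees.
overhead = len(reasoning.split()) / max(1, len(answer.split()))
```

Here the reasoning is twice as long as the answer; on hard math problems the ratio is typically far higher, which is why thinking variants cost more tokens per query than their instruct siblings.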

LFM2.5-1.2B-Thinking sits inside the LFM2.5 1.2B family alongside LFM2.5-1.2B-Instruct.

Its strongest category is Instruction Following (#90), while its weakest is Knowledge (#118). This profile suggests it is better suited to instruction-following workloads than to knowledge-heavy ones.

Creator: LiquidAI
Source Type: Proprietary
Reasoning: Yes
Context Window: 32K
Overall Score: 33 (#115 of 123)
Arena Elo: 1043

Family & Lineage

Family: LFM2.5 1.2B (Reasoning variant)
Canonical Entry: LFM2.5-1.2B-Instruct

Knowledge Benchmarks

MMLU: 27
GPQA: 26
SuperGPQA: 24
OpenBookQA: 22
MMLU-Pro: 51
HLE: 2
FrontierScience: 31

Coding Benchmarks

HumanEval: 17
SWE-bench Verified: 10
LiveCodeBench: 9
SWE-bench Pro: 7

Mathematics Benchmarks

AIME 2023: 28
AIME 2024: 30
AIME 2025: 29
HMMT Feb 2023: 24
HMMT Feb 2024: 26
HMMT Feb 2025: 25
BRUMO 2025: 27
MATH-500: 61

Reasoning Benchmarks

SimpleQA: 29
MuSR: 31
BBH: 67
LongBench v2: 39
MRCRv2: 42

Agentic Benchmarks

Terminal-Bench 2.0: 34
BrowseComp: 37
OSWorld-Verified: 32

Multimodal & Grounded Benchmarks

MMMU-Pro: 27
OfficeQA Pro: 39

Instruction Following Benchmarks

IFEval: 72

Multilingual Benchmarks

MGSM: 62
MMLU-ProX: 60
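The category averages quoted in the FAQ below can be approximated from the per-benchmark scores above. A minimal sketch, assuming a simple unweighted mean (BenchLM's actual aggregation may weight or normalize scores differently, so its published figures need not match exactly):

```python
# Per-benchmark scores for LFM2.5-1.2B-Thinking, as listed above.
scores = {
    "knowledge": [27, 26, 24, 22, 51, 2, 31],
    "coding": [17, 10, 9, 7],
    "mathematics": [28, 30, 29, 24, 26, 25, 27, 61],
}

def unweighted_mean(values: list[int]) -> float:
    """Plain arithmetic mean; an assumption, not BenchLM's documented method."""
    return sum(values) / len(values)

averages = {cat: round(unweighted_mean(vals), 1) for cat, vals in scores.items()}
# The coding mean comes out to 10.8 here, while BenchLM reports 8.2,
# which suggests the site's aggregation is not a plain mean.
```

Treat the site's category averages as its own aggregation rather than something reproducible from the per-benchmark numbers alone.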

Frequently Asked Questions

How does LFM2.5-1.2B-Thinking perform overall in AI benchmarks?

LFM2.5-1.2B-Thinking ranks #115 out of 123 models with an overall score of 33. It is created by LiquidAI and features a 32K context window.

Is LFM2.5-1.2B-Thinking good for knowledge and understanding?

LFM2.5-1.2B-Thinking ranks #118 out of 123 models in knowledge and understanding benchmarks with an average score of 27. There are stronger options in this category.

Is LFM2.5-1.2B-Thinking good for coding and programming?

LFM2.5-1.2B-Thinking ranks #118 out of 123 models in coding and programming benchmarks with an average score of 8.2. There are stronger options in this category.

Is LFM2.5-1.2B-Thinking good for mathematics?

LFM2.5-1.2B-Thinking ranks #110 out of 123 models in mathematics benchmarks with an average score of 42.3. There are stronger options in this category.

Is LFM2.5-1.2B-Thinking good for reasoning and logic?

LFM2.5-1.2B-Thinking ranks #113 out of 123 models in reasoning and logic benchmarks with an average score of 38.4. There are stronger options in this category.

Is LFM2.5-1.2B-Thinking good for agentic tool use and computer tasks?

LFM2.5-1.2B-Thinking ranks #105 out of 123 models in agentic tool use and computer tasks benchmarks with an average score of 34.1. There are stronger options in this category.

Is LFM2.5-1.2B-Thinking good for multimodal and grounded tasks?

LFM2.5-1.2B-Thinking ranks #115 out of 123 models in multimodal and grounded tasks benchmarks with an average score of 32.4. There are stronger options in this category.

Is LFM2.5-1.2B-Thinking good for instruction following?

LFM2.5-1.2B-Thinking ranks #90 out of 123 models in instruction following benchmarks with an average score of 72. This is its strongest category, though many models still score higher.

Is LFM2.5-1.2B-Thinking good for multilingual tasks?

LFM2.5-1.2B-Thinking ranks #97 out of 123 models in multilingual tasks benchmarks with an average score of 60.7. There are stronger options in this category.

Which sibling models are related to LFM2.5-1.2B-Thinking?

LFM2.5-1.2B-Thinking belongs to the LFM2.5 1.2B family. Related variants on BenchLM include LFM2.5-1.2B-Instruct.

What is the context window size of LFM2.5-1.2B-Thinking?

LFM2.5-1.2B-Thinking has a context window of 32K, which determines how much text it can process in a single interaction.
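As a rough illustration of what a 32K window means in practice, a sketch that estimates whether a prompt fits, assuming 32K means 32,000 tokens and using the common heuristic of roughly 4 characters per token for English text (the real figure depends on the model's own tokenizer):

```python
CONTEXT_WINDOW = 32_000  # treating "32K" as 32,000 tokens (an assumption)

def rough_token_estimate(text: str) -> int:
    """Crude heuristic: ~4 characters per token on English text.
    A real check would use the model's actual tokenizer."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_for_output: int = 2_000) -> bool:
    """True if the prompt plus a reserved output budget fits in the window."""
    return rough_token_estimate(prompt) + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("word " * 10_000))  # ~50,000 chars -> ~12,500 tokens; prints True
```

Note that for a thinking model the reasoning tokens also consume this budget, so the output reservation should be larger than for an instruct variant.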

Last updated: March 12, 2026
