LFM2-24B-A2B Benchmark Scores & Performance

Benchmark analysis of LFM2-24B-A2B by LiquidAI across 32 sourced tests on BenchLM.

According to BenchLM.ai, LFM2-24B-A2B ranks #102 out of 123 models with an overall score of 38/100. While not a frontier model, it offers specific advantages depending on the use case.

LFM2-24B-A2B is a proprietary model with a 32K token context window. It processes queries without explicit chain-of-thought reasoning, offering faster response times and lower token usage.

Its strongest category is Mathematics (#92) and its weakest is Agentic (#109). Within its performance tier, it is therefore relatively best suited to mathematical reasoning and quantitative analysis, and weakest at agentic tool use.

Creator: LiquidAI
Source Type: Proprietary
Reasoning: Non-Reasoning
Context Window: 32K
Overall Score: 38 (#102 of 123)
Arena Elo: 1062

Knowledge Benchmarks

MMLU: 46
GPQA: 45
SuperGPQA: 43
OpenBookQA: 41
MMLU-Pro: 51
HLE: 4
FrontierScience: 43

Coding Benchmarks

HumanEval: 42
SWE-bench Verified: 18
LiveCodeBench: 17
SWE-bench Pro: 19

Mathematics Benchmarks

AIME 2023: 46
AIME 2024: 48
AIME 2025: 47
HMMT Feb 2023: 42
HMMT Feb 2024: 44
HMMT Feb 2025: 43
BRUMO 2025: 45
MATH-500: 57

Reasoning Benchmarks

SimpleQA: 44
MuSR: 42
BBH: 63
LongBench v2: 48
MRCRv2: 45

Agentic Benchmarks

Terminal-Bench 2.0: 30
BrowseComp: 38
OSWorld-Verified: 34

Multimodal & Grounded Benchmarks

MMMU-Pro: 39
OfficeQA Pro: 45

Instruction Following Benchmarks

IFEval: 68

Multilingual Benchmarks

MGSM: 64
MMLU-ProX: 60

Frequently Asked Questions

How does LFM2-24B-A2B perform overall in AI benchmarks?

LFM2-24B-A2B ranks #102 out of 123 models with an overall score of 38. It is created by LiquidAI and features a 32K context window.

Is LFM2-24B-A2B good for knowledge and understanding?

LFM2-24B-A2B ranks #99 out of 123 models in knowledge and understanding benchmarks with an average score of 35.6. There are stronger options in this category.

Is LFM2-24B-A2B good for coding and programming?

LFM2-24B-A2B ranks #96 out of 123 models in coding and programming benchmarks with an average score of 18. There are stronger options in this category.

Is LFM2-24B-A2B good for mathematics?

LFM2-24B-A2B ranks #92 out of 123 models in mathematics benchmarks with an average score of 50.4. There are stronger options in this category.

Is LFM2-24B-A2B good for reasoning and logic?

LFM2-24B-A2B ranks #99 out of 123 models in reasoning and logic benchmarks with an average score of 46.6. There are stronger options in this category.

Is LFM2-24B-A2B good for agentic tool use and computer tasks?

LFM2-24B-A2B ranks #109 out of 123 models in agentic tool use and computer tasks benchmarks with an average score of 33.4. There are stronger options in this category.

Is LFM2-24B-A2B good for multimodal and grounded tasks?

LFM2-24B-A2B ranks #102 out of 123 models in multimodal and grounded tasks benchmarks with an average score of 41.7. There are stronger options in this category.

Is LFM2-24B-A2B good for instruction following?

LFM2-24B-A2B ranks #100 out of 123 models in instruction following benchmarks with an average score of 68. There are stronger options in this category.

Is LFM2-24B-A2B good for multilingual tasks?

LFM2-24B-A2B ranks #95 out of 123 models in multilingual tasks benchmarks with an average score of 61.4. There are stronger options in this category.

What is the context window size of LFM2-24B-A2B?

LFM2-24B-A2B has a context window of 32K, which determines how much text it can process in a single interaction.
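A 32K-token window caps how much input plus generated output fits in a single request. As a rough sketch, assuming the common heuristic of about 4 characters per token (actual counts vary by tokenizer and language), you can estimate whether a document will fit:

```python
CONTEXT_WINDOW = 32_000  # LFM2-24B-A2B context window, in tokens
CHARS_PER_TOKEN = 4      # rough heuristic; real tokenizers vary

def fits_in_context(text: str, reserved_for_output: int = 1_000) -> bool:
    """Estimate whether `text` plus a reserved output budget fits the window."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

# ~50,000 chars ≈ 12,500 tokens, well under the 32K window
print(fits_in_context("word " * 10_000))
```

For precise budgeting you would count tokens with the model's actual tokenizer rather than a character heuristic.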

Last updated: March 12, 2026
