Exaone 4.0 32B Benchmark Scores & Performance

Korean-market model

BenchLM tracks Exaone 4.0 32B by LG AI Research as a regional Korea model with dedicated Korean benchmark coverage. Its benchmark rows are published for direct inspection, but it is intentionally excluded from the public global leaderboard so local-market scores do not skew worldwide rankings.

Exaone 4.0 32B is an open-weight model with a 128K-token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.

Exaone 4.0 32B sits inside the Exaone 4.0 family alongside Exaone 4.0 1.2B. This profile currently has 3 of 60 tracked benchmarks. BenchLM uses discounted fallback estimates where necessary so missing categories do not collapse the overall score, but public sourced rows still carry more weight than inferred ones.
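The fallback-weighting idea above can be sketched as a weighted average in which inferred rows are discounted relative to publicly sourced ones. This is a minimal illustration, not BenchLM's published methodology; the discount factor and the hypothetical inferred benchmark row are assumptions for the example.

```python
# Hedged sketch of a "discounted fallback" weighted average.
# The 0.5 discount factor is an illustrative assumption, not
# BenchLM's actual weighting.

def overall_score(scores, sourced, fallback_discount=0.5):
    """Average benchmark scores, down-weighting inferred (fallback) rows.

    scores            -- dict of benchmark name -> score (0-100)
    sourced           -- set of benchmark names with publicly sourced rows
    fallback_discount -- weight applied to inferred rows (assumed value)
    """
    weighted_sum = 0.0
    total_weight = 0.0
    for name, score in scores.items():
        weight = 1.0 if name in sourced else fallback_discount
        weighted_sum += score * weight
        total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0

scores = {
    "MMLU-Pro": 81.8,   # sourced row
    "AIME 2025": 85.3,  # sourced row
    "KMMLU": 75.2,      # sourced row
    "GPQA": 60.0,       # hypothetical inferred (fallback) row
}
sourced = {"MMLU-Pro", "AIME 2025", "KMMLU"}
print(round(overall_score(scores, sourced), 1))  # → 77.8
```

Because the inferred row carries only half the weight, a single low fallback estimate pulls the overall score down far less than a sourced score of the same value would.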

Creator

LG AI Research

Source Type

Open Weight

Reasoning

Reasoning

Context Window

128K

Market

Korea

Overall Score

Tracked separately

Family & Lineage

Family

Exaone 4.0

32B

Sibling Models

Exaone 4.0 1.2B

Rankings Overview

BenchLM tracks this model on the Korean benchmark views, but it is intentionally excluded from the public global leaderboard and global category rankings.

Knowledge Benchmarks

MMLU-Pro
81.8%

Mathematics Benchmarks

AIME 2025
85.3%

Korean Benchmarks

These scores are tracked separately from BenchLM's weighted global ranking. See the Korean benchmark leaderboards for cross-model comparisons.

KMMLU
75.2%

Frequently Asked Questions

How does Exaone 4.0 32B perform overall in AI benchmarks?

Exaone 4.0 32B is tracked as a regional Korea model on BenchLM. Its Korean benchmark scores are visible on the model profile and Korean leaderboards, but it is intentionally excluded from the global overall ranking so regional results do not distort worldwide comparisons.

Is Exaone 4.0 32B good for knowledge and understanding?

Exaone 4.0 32B has visible benchmark coverage in knowledge and understanding, but BenchLM does not currently assign it a global category rank there.

Is Exaone 4.0 32B good for mathematics?

Exaone 4.0 32B has visible benchmark coverage in mathematics, but BenchLM does not currently assign it a global category rank there.

How does Exaone 4.0 32B perform on Korean benchmarks?

Exaone 4.0 32B has 1 Korean benchmark score published on BenchLM. Use the Korean benchmark tables on this page or visit the dedicated Korean leaderboards for side-by-side regional comparisons.

Is Exaone 4.0 32B open source?

Exaone 4.0 32B is released as an open-weight model by LG AI Research, meaning its weights can be downloaded and run locally or fine-tuned for specific use cases.

Which sibling models are related to Exaone 4.0 32B?

Exaone 4.0 32B belongs to the Exaone 4.0 family. Related variants on BenchLM include Exaone 4.0 1.2B.

Does Exaone 4.0 32B have full benchmark coverage on BenchLM?

Not yet. Exaone 4.0 32B currently has 3 sourced benchmark scores out of the 60 benchmarks BenchLM tracks. BenchLM may use discounted fallback values for missing categories, but trustworthy public rows still carry more weight than inferred ones.

What is the context window size of Exaone 4.0 32B?

Exaone 4.0 32B has a context window of 128K tokens, which determines how much text the model can process in a single interaction.

Last updated: March 18, 2026

