Exaone 4.0 32B
BenchLM tracks Exaone 4.0 32B as a regional Korea model. Its benchmark rows are published for direct inspection, but it is intentionally excluded from the public global leaderboard so local-market scores do not skew worldwide rankings.
Exaone 4.0 32B is an open-weight model with a 128K-token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.
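The trade-off above can be sketched as a simple token budget: reasoning traces consume part of the context window before the final answer is produced. This is a minimal illustration, not a measured profile of Exaone 4.0 32B; the prompt and reasoning sizes below are assumed numbers.

```python
# Illustrative sketch: chain-of-thought spends part of the context window
# on intermediate "thinking" tokens, leaving less room for the prompt and
# the final answer. Token counts here are assumptions for illustration.

CONTEXT_WINDOW = 128_000  # 128K-token window stated on this profile


def remaining_budget(prompt_tokens: int, reasoning_tokens: int,
                     context_window: int = CONTEXT_WINDOW) -> int:
    """Tokens left for the final answer after prompt and reasoning."""
    used = prompt_tokens + reasoning_tokens
    if used > context_window:
        raise ValueError("prompt + reasoning exceed the context window")
    return context_window - used


# A long prompt plus a verbose reasoning trace still leaves headroom:
print(remaining_budget(prompt_tokens=20_000, reasoning_tokens=8_000))
# 100000
```

The same arithmetic explains the cost side: every reasoning token is billed and generated sequentially, which is where the higher latency and token usage come from.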
Exaone 4.0 32B sits inside the Exaone 4.0 family alongside Exaone 4.0 1.2B. This profile currently has published scores for 2 of the 186 benchmarks BenchLM tracks. BenchLM only exposes non-generated benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.
Ranking Distribution
Category rank across 2 benchmark categories — sorted by best rank
Category Performance
Scores across all benchmark categories (0-100 scale)
Category Breakdown
Agentic
Coding
Reasoning
Knowledge
Math
Multilingual
Multimodal
Inst. Following
Benchmark Details
Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.
Compare This Model
See how Exaone 4.0 32B stacks up against similar models
Frequently Asked Questions
How does Exaone 4.0 32B perform overall in AI benchmarks?
Exaone 4.0 32B is tracked as a regional Korea model on BenchLM. Its Korean benchmark scores are visible on the model profile and Korean leaderboards, but it is intentionally excluded from the global overall ranking so regional results do not distort worldwide comparisons.
Is Exaone 4.0 32B good for knowledge and understanding?
Exaone 4.0 32B has visible benchmark coverage in knowledge and understanding, but BenchLM does not currently assign it a global category rank there.
Is Exaone 4.0 32B good for mathematics?
Exaone 4.0 32B has visible benchmark coverage in mathematics, but BenchLM does not currently assign it a global category rank there.
Is Exaone 4.0 32B open source?
Exaone 4.0 32B is an open-weight model created by LG AI Research: its weights can be downloaded and run locally or fine-tuned for specific use cases. Note that open weights are not the same as a fully open-source license, so check the model's license terms before commercial use.
Which sibling models are related to Exaone 4.0 32B?
Exaone 4.0 32B belongs to the Exaone 4.0 family. Related variants on BenchLM include Exaone 4.0 1.2B.
Does Exaone 4.0 32B have full benchmark coverage on BenchLM?
Not yet. Exaone 4.0 32B currently has 2 published benchmark scores out of the 186 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.
What is the context window size of Exaone 4.0 32B?
Exaone 4.0 32B has a context window of 128K tokens, which determines how much text it can process in a single interaction.
Related Resources