K-Exaone Benchmark Scores & Performance

Korean-market model

BenchLM tracks K-Exaone by LG AI Research as a regional Korea model with dedicated Korean benchmark coverage. Its benchmark rows are published for direct inspection, but it is intentionally excluded from the public global leaderboard so local-market scores do not skew worldwide rankings.

K-Exaone is a proprietary model with a 256K token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.

This profile currently has sourced scores for 5 of the 60 benchmarks BenchLM tracks. BenchLM uses discounted fallback estimates where necessary so missing categories do not collapse the overall score, but publicly sourced rows still carry more weight than inferred ones.
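BenchLM's exact weighting formula is not published; the sketch below only illustrates the general idea of discounted fallbacks, where a missing category contributes a category-level estimate at reduced weight instead of zero. The discount factor, category names, and mean values are all illustrative assumptions.

```python
# Sketch of discounted-fallback score aggregation.
# All names, values, and the discount factor are illustrative
# assumptions, not BenchLM's actual formula.

FALLBACK_DISCOUNT = 0.7  # inferred scores count less than sourced ones

def overall_score(sourced: dict, category_means: dict) -> float:
    """Weighted average over all tracked categories; categories with
    no sourced score fall back to a discounted estimate instead of
    dragging the overall score to zero."""
    total, weight = 0.0, 0.0
    for category, mean in category_means.items():
        if category in sourced:
            total += sourced[category]   # sourced row: full weight
            weight += 1.0
        else:
            total += FALLBACK_DISCOUNT * mean  # inferred: reduced weight
            weight += FALLBACK_DISCOUNT
    return total / weight

sourced = {"coding": 49.4}
category_means = {"coding": 55.0, "math": 60.0, "knowledge": 65.0}
print(round(overall_score(sourced, category_means), 1))  # → 57.0
```

Because the fallback terms carry less weight, adding a real sourced score always moves the overall number more than an inferred one, which matches the stated behavior that public rows outweigh estimates.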

Creator

LG AI Research

Source Type

Proprietary

Reasoning

Chain-of-thought

Context Window

256K

Market

Korea

Overall Score

Tracked separately

Rankings Overview

BenchLM tracks this model on the Korean benchmark views, but it is intentionally excluded from the public global leaderboard and global category rankings.

Coding Benchmarks

SWE-bench Verified
49.4%

Korean Benchmarks

These scores are tracked separately from BenchLM's weighted global ranking. See the Korean benchmark leaderboards for cross-model comparisons.

KMMLU-Pro
67.3%
CLIcK
83.9%
KoBALT
61.8%
HRM8K
90.9%

Frequently Asked Questions

How does K-Exaone perform overall in AI benchmarks?

K-Exaone is tracked as a regional Korea model on BenchLM. Its Korean benchmark scores are visible on the model profile and Korean leaderboards, but it is intentionally excluded from the global overall ranking so regional results do not distort worldwide comparisons.

Is K-Exaone good for coding and programming?

K-Exaone has one published coding score on BenchLM (SWE-bench Verified at 49.4%), but BenchLM does not currently assign it a global category rank in coding and programming.

How does K-Exaone perform on Korean benchmarks?

K-Exaone has 4 Korean benchmark scores published on BenchLM. Use the Korean benchmark tables on this page or visit the dedicated Korean leaderboards for side-by-side regional comparisons.

Does K-Exaone have full benchmark coverage on BenchLM?

Not yet. K-Exaone currently has 5 sourced benchmark scores out of the 60 benchmarks BenchLM tracks. BenchLM may use discounted fallback values for missing categories, but trustworthy public rows still carry more weight than inferred ones.

What is the context window size of K-Exaone?

K-Exaone has a context window of 256K tokens, which determines how much text it can process in a single interaction.

Last updated: March 18, 2026
