BenchLM tracks K-Exaone by LG AI Research as a regional Korea model with dedicated Korean benchmark coverage. Its benchmark rows are published for direct inspection, but it is intentionally excluded from the public global leaderboard so local-market scores do not skew worldwide rankings.
K-Exaone is a proprietary model with a 256K token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.
This profile currently has sourced scores for 5 of the 60 benchmarks BenchLM tracks. BenchLM uses discounted fallback estimates where necessary so missing categories do not collapse the overall score, but publicly sourced rows still carry more weight than inferred ones.
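The discounted-fallback idea can be sketched as a weighted average in which sourced rows count at full weight and inferred categories count at a reduced weight. The function name, the 0.5 discount factor, and the flat per-category weighting below are illustrative assumptions, not BenchLM's published formula:

```python
# Hypothetical sketch of a discounted-fallback aggregate score.
# The 0.5 discount and equal per-category weights are assumptions.

def overall_score(sourced: dict[str, float],
                  fallback: dict[str, float],
                  discount: float = 0.5) -> float:
    """Average sourced scores at full weight; fill missing
    categories with discounted fallback estimates."""
    total = 0.0
    weight = 0.0
    for cat in set(sourced) | set(fallback):
        if cat in sourced:
            total += sourced[cat]
            weight += 1.0
        else:
            total += discount * fallback[cat]
            weight += discount
    return total / weight if weight else 0.0

# Example: two sourced rows, one inferred category.
score = overall_score(
    sourced={"korean_qa": 82.0, "coding": 74.0},
    fallback={"math": 70.0},
)
# (82 + 74 + 0.5 * 70) / (1 + 1 + 0.5) = 76.4
```

Because the fallback category contributes less weight, a missing benchmark pulls the aggregate toward its estimate only gently instead of zeroing the category out.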
Creator
LG AI Research
Source Type
Proprietary
Reasoning
Reasoning
Context Window
256K
Market
Korea
Overall Score
Tracked separately
BenchLM tracks this model on the Korean benchmark views, but it is intentionally excluded from the public global leaderboard and global category rankings. These scores are tracked separately from BenchLM's weighted global ranking; see the Korean benchmark leaderboards for cross-model comparisons.
K-Exaone is tracked as a regional Korea model on BenchLM. Its Korean benchmark scores are visible on the model profile and Korean leaderboards, but it is intentionally excluded from the global overall ranking so regional results do not distort worldwide comparisons.
K-Exaone has visible benchmark coverage in coding, but BenchLM does not currently assign it a global category rank there.
K-Exaone has 4 Korean benchmark scores published on BenchLM. Use the Korean benchmark tables on this page or visit the dedicated Korean leaderboards for side-by-side regional comparisons.
Not yet. K-Exaone currently has 5 sourced benchmark scores out of the 60 benchmarks BenchLM tracks. BenchLM may use discounted fallback values for missing categories, but trustworthy public rows still carry more weight than inferred ones.
K-Exaone has a 256K-token context window, which determines how much text it can process in a single interaction.