
Korean Massive Multitask Language Understanding (KMMLU)

KMMLU evaluates expert-level knowledge in Korean across 45 subjects; 20% of its questions require Korean cultural context.

How BenchLM shows KMMLU right now

BenchLM tracks KMMLU in its local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.
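
As a concrete illustration, a tracked row can be thought of as a record that only counts as verified once its exact-source attachment lands. The sketch below is hypothetical: BenchLM's actual schema is not published, and every field name here is an assumption.

# Hypothetical record shape: BenchLM's real row schema is not public,
# so every field name below is an assumption for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedRow:
    model_id: str                     # e.g. "claude-sonnet-4-6"
    score: float                      # tracked KMMLU score, in percent
    source_url: Optional[str] = None  # exact-source attachment, once added

    @property
    def verified(self) -> bool:
        # A row counts as fully verified only once its exact-source
        # attachment has been recorded.
        return self.source_url is not None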

17 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only

Tracked score on KMMLU — April 16, 2026

BenchLM mirrors the published tracked score view for KMMLU. Claude Sonnet 4.6 leads the public snapshot at 85%, followed by GPT-5.4 (83.7%) and Solar Pro 2 (80.1%). BenchLM does not use these results to rank models overall.

17 models · Korean Benchmarks · Korean-language benchmark · Refreshing · Display only · Updated April 16, 2026

The published KMMLU snapshot is tightly clustered at the top: Claude Sonnet 4.6 sits at 85%, while the third row is only 4.9 points behind. The broader top-10 spread is 15.7 points, so the benchmark still separates strong models even when the leaders cluster.
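
The spread figures above can be recomputed directly from the tracked scores in the table further down this page. The minimal Python snippet below is only a convenience check, not part of BenchLM's tooling.

# Recompute the quoted spread figures from the tracked top-10 scores
# (values copied from the tracked score table below).
top10 = [85.0, 83.7, 80.1, 79.5, 78.4, 78.0, 76.5, 75.2, 71.5, 69.3]

leader_to_third = top10[0] - top10[2]  # 85.0 - 80.1 = 4.9 points
top10_spread = top10[0] - top10[-1]    # 85.0 - 69.3 = 15.7 points

print(f"Leader-to-third gap: {leader_to_third:.1f} points")
print(f"Top-10 spread: {top10_spread:.1f} points")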

17 models have been evaluated on KMMLU. The benchmark falls in the Korean Benchmarks category, which BenchLM tracks separately from its weighted global scoring system, so these results are best compared on the dedicated Korean benchmark views. KMMLU is currently displayed for reference only and is excluded from the scoring formula, so it does not directly affect overall rankings.
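
BenchLM's scoring code is not public, so the following is only a hedged sketch of how a weighted global score might skip display-only benchmarks such as KMMLU. The field names ("display_only", "weight") and the default weight are assumptions, not BenchLM's actual implementation.

# Hypothetical sketch: BenchLM's real scoring implementation is not
# public. Field names ("display_only", "weight") are assumptions.
def weighted_global_score(results: dict[str, float],
                          benchmarks: dict[str, dict]) -> float:
    """Weighted average over scored benchmarks, skipping display-only
    ones such as KMMLU."""
    total = weight_sum = 0.0
    for bench, score in results.items():
        meta = benchmarks.get(bench, {})
        if meta.get("display_only", False):
            continue  # e.g. KMMLU: shown for reference, never scored
        w = meta.get("weight", 1.0)
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0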

About KMMLU

Year: 2024
Tasks: 35,030 questions
Format: Multiple-choice questions
Difficulty: Elementary to professional level in Korean

KMMLU tests human-level understanding and reasoning in the Korean language across diverse subjects.

BenchLM freshness & provenance

Version: KMMLU 2024
Refresh cadence: Annual
Staleness state: Refreshing
Question availability: Public benchmark set


BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
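
As a rough illustration of that policy, the mapping below sketches how a staleness state could translate into a scoring role. The state labels match the ones shown on this page, but the role assignments are assumptions, not the published methodology.

# Illustrative mapping only: state labels come from this page, but the
# role assignments are assumed rather than taken from the methodology.
FRESHNESS_ROLE = {
    "fresh": "strong differentiator",
    "refreshing": "benchmark to watch",   # KMMLU's current state
    "stale": "display-only reference",
}

def scoring_role(staleness_state: str) -> str:
    return FRESHNESS_ROLE.get(staleness_state.lower(),
                              "display-only reference")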

Tracked score table (17 models)

Rank  Model                     ID                   Score
1     Claude Sonnet 4.6         claude-sonnet-4-6    85%
2     GPT-5.4                   gpt-5-4              83.7%
3     Solar Pro 2               solar-pro-2          80.1%
4                                                    79.5%
5     HyperClova X Think 32B    hyperclova-x-think   78.4%
6                                                    78%
7     GPT-5 mini                gpt-5-mini           76.5%
8     Exaone 4.0 32B            exaone-4-0-32b       75.2%
9     GPT-5.2                   gpt-5-2              71.5%
10    GPT-5 nano                gpt-5-nano           69.3%
11    GPT-5.1                   gpt-5-1              65.9%
12    GPT-4.1                   gpt-4-1              65.5%
13    GPT-4o                    gpt-4o               64.3%
14    GPT-4.1 mini              gpt-4-1-mini         59.3%
15    GPT-4 Turbo               gpt-4-turbo          58.8%
16    GPT-4o mini               gpt-4o-mini          52.6%
17    GPT-4.1 nano              gpt-4-1-nano         48.6%

FAQ

What does KMMLU measure?

KMMLU evaluates expert-level knowledge in Korean across 45 subjects; 20% of its questions require Korean cultural context.

Which model leads the published KMMLU snapshot?

Claude Sonnet 4.6 currently leads the published KMMLU snapshot with a tracked score of 85%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on KMMLU?

17 AI models are included in BenchLM's mirrored KMMLU snapshot, based on the public leaderboard captured on April 16, 2026.

Last updated: April 16, 2026 · mirrored from the public benchmark leaderboard
