KMMLU-Hard

A filtered hard subset of KMMLU containing ~5,000 questions that most models get wrong.

How BenchLM shows KMMLU-Hard right now

BenchLM is tracking KMMLU-Hard in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.

11 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only

Tracked score on KMMLU-Hard — April 10, 2026

BenchLM mirrors the published tracked score view for KMMLU-Hard. GPT-5.4 leads the public snapshot at 72.8%, followed by GPT-5 mini (60.6%) and GPT-5 nano (51.7%). BenchLM does not use these results to rank models overall.

11 models · Korean Benchmarks · Korean-language benchmark · Current · Display only · Updated April 10, 2026

The top of the published KMMLU-Hard snapshot is not especially tight: GPT-5.4 sits at 72.8%, 12.2 points ahead of GPT-5 mini, and the third row is 21.1 points behind. The broader top-10 spread is 48.2 points, so the benchmark separates models clearly even among the stronger entries.
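
The margins quoted above can be reproduced directly from the tracked score table further down the page. The snippet below is only a sanity-check sketch over the published percentages, not part of BenchLM's own tooling.

```python
# Tracked KMMLU-Hard scores from the April 10, 2026 snapshot, in rank order (percent).
scores = [72.8, 60.6, 51.7, 51.1, 43.9, 42.8, 39.6, 35.6, 30.6, 24.6, 24.3]

leader = scores[0]
print(f"Lead over 2nd place: {leader - scores[1]:.1f} points")  # 12.2
print(f"Lead over 3rd place: {leader - scores[2]:.1f} points")  # 21.1
print(f"Top-10 spread:       {leader - scores[9]:.1f} points")  # 48.2
```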

11 models have been evaluated on KMMLU-Hard. The benchmark falls in the Korean Benchmarks category. BenchLM tracks this category separately from its weighted global scoring system, so these results are best compared on the dedicated Korean benchmark views. KMMLU-Hard is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
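
As a rough illustration of what "excluded from the scoring formula" means here, the sketch below computes a weighted overall score while skipping any benchmark flagged as display-only. The record layout, the field names, and the second benchmark row are hypothetical placeholders, not BenchLM's actual schema or data.

```python
# Hypothetical benchmark records; "weight" and "display_only" are illustrative
# field names, and "OtherBenchmark" is a placeholder row, not real BenchLM data.
benchmarks = [
    {"name": "KMMLU-Hard",     "score": 72.8, "weight": 1.0, "display_only": True},
    {"name": "OtherBenchmark", "score": 80.0, "weight": 1.0, "display_only": False},
]

def weighted_overall(records):
    """Weighted average over scorable benchmarks; display-only rows are ignored."""
    scorable = [r for r in records if not r["display_only"]]
    total_weight = sum(r["weight"] for r in scorable)
    if total_weight == 0:
        return None
    return sum(r["score"] * r["weight"] for r in scorable) / total_weight

# KMMLU-Hard appears on the page but contributes nothing to the overall score.
print(weighted_overall(benchmarks))  # 80.0
```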

About KMMLU-Hard

Year: 2025
Tasks: ~5,000 questions
Format: Multiple-choice questions
Difficulty: Advanced Korean reasoning

Provides strong signals for advanced frontier models attempting reasoning in Korean.

BenchLM freshness & provenance

Version: KMMLU-Hard 2025
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set


BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
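
The methodology page is the authoritative description of that policy; the sketch below is only a loose illustration of how freshness metadata and category exclusions could feed such a decision. The tier names, the "Aging" state, and the category-override rule are all assumptions made for the example.

```python
# Illustrative only: tier names, states, and the category override are assumptions,
# not BenchLM's documented policy.
def benchmark_treatment(staleness_state: str, category_excluded: bool) -> str:
    if category_excluded:
        # e.g. Korean Benchmarks are tracked outside the weighted global score
        return "display-only reference"
    if staleness_state == "Current":
        return "strong differentiator"
    if staleness_state == "Aging":
        return "benchmark to watch"
    return "display-only reference"

# KMMLU-Hard: staleness state is "Current", but its category sits outside the
# global scoring system, so it stays display-only on this page.
print(benchmark_treatment("Current", category_excluded=True))  # display-only reference
```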

Tracked score table (11 models)

1. GPT-5.4 (gpt-5-4): 72.8%
2. GPT-5 mini (gpt-5-mini): 60.6%
3. GPT-5 nano (gpt-5-nano): 51.7%
4. GPT-5.2 (gpt-5-2): 51.1%
5. GPT-5.1 (gpt-5-1): 43.9%
6. GPT-4.1 (gpt-4-1): 42.8%
7. GPT-4o (gpt-4o): 39.6%
8. GPT-4.1 mini (gpt-4-1-mini): 35.6%
9. GPT-4 Turbo (gpt-4-turbo): 30.6%
10. GPT-4o mini (gpt-4o-mini): 24.6%
11. GPT-4.1 nano (gpt-4-1-nano): 24.3%

FAQ

What does KMMLU-Hard measure?

A filtered hard subset of KMMLU containing ~5,000 questions that most models get wrong.

Which model leads the published KMMLU-Hard snapshot?

GPT-5.4 currently leads the published KMMLU-Hard snapshot with a tracked score of 72.8%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on KMMLU-Hard?

11 AI models are included in BenchLM's mirrored KMMLU-Hard snapshot, based on the public leaderboard captured on April 10, 2026.

Last updated: April 10, 2026 · mirrored from the public benchmark leaderboard
