C-Eval

A Chinese-language academic and professional benchmark spanning humanities, social science, STEM, and applied subjects.

Benchmark score on C-Eval — April 10, 2026

BenchLM mirrors the published score view for C-Eval. Qwen3.6 Plus leads the public snapshot at 93.3%, followed by Qwen3.5 397B (93.0%) and Claude Opus 4.5 (92.2%). BenchLM does not use these results to rank models overall.

3 models · Knowledge · Stale · Display only · Updated April 10, 2026

The published C-Eval snapshot is tightly clustered: Qwen3.6 Plus sits at 93.3%, and the third-place model trails by only 1.1 points, so all three published scores fall within a narrow band.
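The clustering claim is simple arithmetic over the published scores. A quick check (scores and model names taken from the snapshot above):

```python
# Published C-Eval snapshot scores (percent), per the table on this page.
scores = {
    "Qwen3.6 Plus": 93.3,
    "Qwen3.5 397B": 93.0,
    "Claude Opus 4.5": 92.2,
}

# Spread between the leader and the last published row.
spread = round(max(scores.values()) - min(scores.values()), 1)
print(spread)  # 1.1 — every published score sits within 1.1 points of the leader
```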

3 models have been evaluated on C-Eval. The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. C-Eval is currently displayed for reference only and excluded from the scoring formula, so it does not affect overall rankings.
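In practice, "display only" means the benchmark drops out of the weighted average entirely. The following is a minimal sketch of that exclusion, assuming a simple weighted-mean formula; the 12% Knowledge weight comes from the text, but the record layout and the second benchmark are illustrative, not BenchLM's actual implementation:

```python
# Hypothetical benchmark records: score (percent), category weight,
# and whether the benchmark counts toward the overall score.
benchmarks = [
    {"name": "C-Eval", "score": 93.3, "weight": 0.12, "scored": False},  # display only
    {"name": "OtherKnowledgeBench", "score": 88.0, "weight": 0.12, "scored": True},
]

def overall_score(benchmarks):
    """Weighted average over scored benchmarks only; display-only rows are skipped."""
    scored = [b for b in benchmarks if b["scored"]]
    total_weight = sum(b["weight"] for b in scored)
    if total_weight == 0:
        return None  # nothing contributes to the overall score
    return sum(b["score"] * b["weight"] for b in scored) / total_weight

print(round(overall_score(benchmarks), 2))  # 88.0 — C-Eval does not move the overall number
```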

About C-Eval

Year

2023

Tasks

Chinese academic and professional exams

Format

Multiple choice questions

Difficulty

High school to professional level

C-Eval is one of the clearest public signals for non-English academic knowledge performance. It tests whether a model can sustain strong factual recall and reasoning under Chinese-language exam conditions across many domains.

BenchLM freshness & provenance

Version

C-Eval 2023

Refresh cadence

Static

Staleness state

Stale

Question availability

Public benchmark set

Stale · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
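The three treatment tiers named above can be sketched as a small classification over the freshness metadata. The tier names come from the paragraph; the decision rule itself is an assumption for illustration, not BenchLM's published policy:

```python
def benchmark_tier(staleness: str, scored: bool) -> str:
    """Map freshness metadata to a treatment tier (illustrative rule only)."""
    if not scored:
        return "display-only reference"   # excluded from the scoring formula
    if staleness == "stale":
        return "benchmark to watch"       # still scored, but flagged
    return "strong differentiator"        # fresh and fully scored

# C-Eval on this page is marked Stale and excluded from scoring:
print(benchmark_tier("stale", scored=False))  # display-only reference
```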

Benchmark score table (3 models)

1. Qwen3.6 Plus · 93.3%
2. Qwen3.5 397B · 93.0%
3. Claude Opus 4.5 · 92.2%

FAQ

What does C-Eval measure?

C-Eval measures Chinese-language academic and professional knowledge across humanities, social science, STEM, and applied subjects.

Which model scores highest on C-Eval?

Qwen3.6 Plus by Alibaba currently leads with a score of 93.3% on C-Eval.

How many models are evaluated on C-Eval?

3 AI models have been evaluated on C-Eval on BenchLM.

Last updated: April 10, 2026 · BenchLM version C-Eval 2023
