MedXpertQA (Text)

A medical multiple-choice benchmark spanning many specialties with 10 answer options per question.

Benchmark score on MedXpertQA (Text) — April 8, 2026

BenchLM mirrors the published score view for MedXpertQA (Text). Gemini 3.1 Pro leads the public snapshot at 71.5%, followed by GPT-5.4 (59.6%) and Muse Spark (52.6%). BenchLM does not use these results to rank models overall.

5 models · Knowledge · Current · Display only · Updated April 8, 2026

The published MedXpertQA (Text) snapshot shows a clear leader: Gemini 3.1 Pro sits at 71.5%, with the third row 18.9 points behind. The full five-model spread is 21.3 points, so the benchmark still separates strong models below the leader.
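The point gaps above follow directly from the published scores. A quick sketch (scores taken from this page's snapshot) confirms the arithmetic:

```python
# Published MedXpertQA (Text) snapshot scores, leader first (from this page).
scores = [71.5, 59.6, 52.6, 52.1, 50.2]

leader = scores[0]
third_row_gap = round(leader - scores[2], 1)   # leader vs. third row
full_spread = round(leader - scores[-1], 1)    # leader vs. last listed model

print(third_row_gap)  # 18.9
print(full_spread)    # 21.3
```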

5 models have been evaluated on MedXpertQA (Text). The benchmark falls in the Knowledge category, which carries a 12% weight in BenchLM.ai's overall scoring system. MedXpertQA (Text) is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
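As a rough illustration of how a display-only benchmark stays out of a weighted overall score: the benchmark names, records, and exclusion rule below are assumptions modeled on the description above, not BenchLM's actual formula.

```python
# Hypothetical benchmark records; "display_only" mirrors the flag described above.
benchmarks = [
    {"name": "MedXpertQA (Text)", "category": "Knowledge",
     "score": 71.5, "display_only": True},
    {"name": "SomeOtherKnowledgeBench", "category": "Knowledge",
     "score": 80.0, "display_only": False},
]
category_weights = {"Knowledge": 0.12}  # Knowledge carries 12% overall

def weighted_contribution(benchmarks, weights):
    """Sum weighted scores, skipping display-only benchmarks entirely."""
    total = 0.0
    for b in benchmarks:
        if b["display_only"]:
            continue  # shown for reference, never scored
        total += weights[b["category"]] * b["score"]
    return total

print(weighted_contribution(benchmarks, category_weights))  # 9.6
```

Note that MedXpertQA (Text) contributes nothing here regardless of its score; only the non-display benchmark in the same category reaches the weighted total.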

About MedXpertQA (Text)

Year

2026

Tasks

2,450 medical multiple-choice questions

Format

Medical MCQ

Difficulty

Professional medical knowledge

The benchmark metadata describes the text variant as 2,450 specialty-spanning medical questions with answer choices A-J. BenchLM treats it as a display-only health benchmark because it is not part of the weighted core schema.

BenchLM freshness & provenance

Version

MedXpertQA (Text) 2026

Refresh cadence

Quarterly

Staleness state

Current

Question availability

Public benchmark set


BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
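The tiering decision described above can be sketched as a small function. The tier names come from this page; the branching logic is an assumption, not BenchLM's published policy:

```python
def freshness_tier(staleness: str, display_only: bool) -> str:
    """Map freshness metadata to a treatment tier (assumed logic)."""
    if display_only:
        return "display-only reference"   # excluded from scoring, like this page
    if staleness == "Current":
        return "strong differentiator"
    return "benchmark to watch"

# MedXpertQA (Text) is Current but flagged display-only:
print(freshness_tier("Current", display_only=True))  # display-only reference
```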

Benchmark score table (5 models)

1. Gemini 3.1 Pro: 71.5%
2. GPT-5.4: 59.6%
3. Muse Spark: 52.6%
4. 52.1%
5. 50.2%

FAQ

What does MedXpertQA (Text) measure?

A medical multiple-choice benchmark spanning many specialties with 10 answer options per question.

Which model scores highest on MedXpertQA (Text)?

Gemini 3.1 Pro by Google currently leads with a score of 71.5% on MedXpertQA (Text).

How many models are evaluated on MedXpertQA (Text)?

5 AI models have been evaluated on MedXpertQA (Text) on BenchLM.

Last updated: April 8, 2026 · BenchLM version MedXpertQA (Text) 2026
