MedXpertQA Multimodal (MedXpertQA (MM))

A multimodal medical multiple-choice benchmark covering clinical images such as X-rays, histology, and dermatology.

Benchmark score on MedXpertQA (MM) — April 8, 2026

BenchLM mirrors the published score view for MedXpertQA (MM). Gemini 3.1 Pro leads the public snapshot at 81.3%, followed by Muse Spark (78.4%) and GPT-5.4 (77.1%). BenchLM does not use these results to rank models overall.

5 models · Multimodal & Grounded · Current · Display only · Updated April 8, 2026

The published MedXpertQA (MM) snapshot is tightly clustered at the top: Gemini 3.1 Pro sits at 81.3%, while the third row is only 4.2 points behind. The full spread across the five evaluated models is 16.5 points, so the benchmark still separates strong models even when the leaders cluster.

5 models have been evaluated on MedXpertQA (MM). The benchmark falls in the Multimodal & Grounded category. This category carries a 12% weight in BenchLM.ai's overall scoring system. MedXpertQA (MM) is currently displayed for reference but excluded from the scoring formula, so it does not directly affect overall rankings.
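The weighting and exclusion logic described above can be sketched as follows. This is an illustrative sketch only, not BenchLM's actual scoring code; the function and field names (`overall_score`, `display_only`, `CATEGORY_WEIGHTS`) are hypothetical, and the only weight taken from the page is the 12% for Multimodal & Grounded.

```python
# Hypothetical sketch of category-weighted overall scoring that skips
# display-only benchmarks such as MedXpertQA (MM). Not BenchLM's code.

CATEGORY_WEIGHTS = {
    "multimodal_grounded": 0.12,  # weight stated on this page
    "other": 0.88,                # placeholder for remaining categories
}

def overall_score(results):
    """results: list of dicts with 'score', 'category', 'display_only'."""
    totals, counts = {}, {}
    for r in results:
        if r["display_only"]:
            continue  # shown for reference, excluded from the formula
        cat = r["category"]
        totals[cat] = totals.get(cat, 0.0) + r["score"]
        counts[cat] = counts.get(cat, 0) + 1
    if not totals:
        return None
    # Average within each category, then combine by category weight,
    # renormalizing over the categories that actually have scores.
    num = sum(CATEGORY_WEIGHTS[c] * totals[c] / counts[c] for c in totals)
    den = sum(CATEGORY_WEIGHTS[c] for c in totals)
    return num / den
```

Under this sketch, a display-only benchmark contributes nothing: dropping MedXpertQA (MM) from the input leaves the overall score unchanged.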

About MedXpertQA (MM)

Year

2026

Tasks

2,000 multimodal medical questions

Format

Medical visual MCQ

Difficulty

Clinical multimodal reasoning

The benchmark metadata describes the multimodal MedXpertQA variant as 2,000 clinically grounded medical questions with five answer choices. BenchLM stores it as a display-only health and multimodal reference.

BenchLM freshness & provenance

Version

MedXpertQA (MM) 2026

Refresh cadence

Quarterly

Staleness state

Current

Question availability

Public benchmark set

Current · Display only

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
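The three treatments named above can be sketched as a simple age-based state machine. This is an assumption for illustration, not BenchLM's published policy: the function name, the thresholds, and the use of the quarterly cadence as the cutoff are all hypothetical.

```python
# Hypothetical mapping from refresh age to a freshness state, assuming
# a quarterly cadence (90 days). Not BenchLM's actual policy.
from datetime import date

def staleness_state(last_refresh: date, today: date, cadence_days: int = 90) -> str:
    """Within one cadence -> 'Current' (strong differentiator),
    within two -> 'Watch', otherwise -> 'Display only'."""
    age = (today - last_refresh).days
    if age <= cadence_days:
        return "Current"
    if age <= 2 * cadence_days:
        return "Watch"
    return "Display only"
```

With the April 8, 2026 refresh date shown on this page, such a rule would keep the benchmark "Current" through early July 2026 before demoting it.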

Benchmark score table (5 models)

1. Gemini 3.1 Pro: 81.3%
2. Muse Spark: 78.4%
3. GPT-5.4: 77.1%
4. (model not named in the published snapshot text): 65.8%
5. (model not named in the published snapshot text): 64.8%

FAQ

What does MedXpertQA (MM) measure?

A multimodal medical multiple-choice benchmark covering clinical images such as X-rays, histology, and dermatology.

Which model scores highest on MedXpertQA (MM)?

Gemini 3.1 Pro by Google currently leads with a score of 81.3% on MedXpertQA (MM).

How many models are evaluated on MedXpertQA (MM)?

5 AI models have been evaluated on MedXpertQA (MM) on BenchLM.

Last updated: April 8, 2026 · BenchLM version MedXpertQA (MM) 2026
