TruthfulQA

A benchmark designed to measure whether language models produce truthful answers instead of repeating common misconceptions or misleading falsehoods.

How BenchLM shows TruthfulQA right now

BenchLM is tracking TruthfulQA in the local dataset, but exact-source verification records for these rows are still being attached. To avoid a blank benchmark page, BenchLM shows the current tracked rows below as a display-only reference table.

These tracked rows are useful for inspection and spot-checking, but until exact-source attachments are completed they should not be treated as fully verified public benchmark rows.
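
As a rough illustration of this gating rule, here is a minimal sketch, assuming a hypothetical row record and field names (BenchLM's actual schema is not shown on this page): a tracked row stays display-only until an exact-source attachment is recorded.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedRow:
    model: str
    score: float
    # Hypothetical field: link to the exact published source
    # (leaderboard snapshot, paper, or report) backing this score.
    exact_source: Optional[str] = None

def row_status(row: TrackedRow) -> str:
    """Rows without an exact-source attachment remain display-only."""
    return "verified" if row.exact_source else "display-only"

rows = [TrackedRow("Phi-4", 77.5), TrackedRow("Kimi K2.5", 57.3)]
for row in rows:
    print(f"{row.model}: {row_status(row)}")  # both are display-only
```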

2 tracked models · Local tracked rows · Awaiting exact-source attachments · Display only

Tracked score on TruthfulQA — April 8, 2026

BenchLM mirrors the published tracked score view for TruthfulQA. Phi-4 leads the public snapshot at 77.5%, followed by Kimi K2.5 (57.3%). BenchLM does not use these results to rank models overall.

2 models · Knowledge · Stale · Display only · Updated April 8, 2026

About TruthfulQA

Year: 2021
Tasks: Truthfulness and misconception resistance
Format: Question answering
Difficulty: Hallucination and factuality stress test

TruthfulQA matters because many models sound confident while repeating popular but false answers. Although it predates more recent factuality suites, it remains a useful benchmark for factuality and hallucination resistance.

BenchLM freshness & provenance

Version: TruthfulQA 2021
Refresh cadence: Static
Staleness state: Stale
Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
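
As a loose sketch of how freshness metadata could feed that decision (the thresholds and field values below are hypothetical; the authoritative rules live on the methodology page), the three tiers named above might be assigned like this:

```python
def benchmark_tier(staleness: str, refresh_cadence: str) -> str:
    """Map freshness metadata to one of the three tiers named above.

    Sketch only: the real policy is defined on the BenchLM
    methodology page; the conditions here are illustrative.
    """
    if staleness == "fresh" and refresh_cadence != "static":
        return "strong differentiator"
    if staleness == "aging":
        return "benchmark to watch"
    return "display-only reference"

# TruthfulQA's metadata above (refresh cadence: static, staleness:
# stale) places it in the display-only tier.
print(benchmark_tier("stale", "static"))  # display-only reference
```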

Tracked score table (2 models)

1. Phi-4 (phi-4) · 77.5%
2. Kimi K2.5 (kimi-k2-5) · 57.3%

FAQ

What does TruthfulQA measure?

TruthfulQA measures whether language models produce truthful answers instead of repeating common misconceptions or misleading falsehoods.

Which model leads the published TruthfulQA snapshot?

Phi-4 currently leads the published TruthfulQA snapshot with a tracked score of 77.5%. BenchLM shows this benchmark for display only and does not use it in overall rankings.

How many models are evaluated on TruthfulQA?

Two models are included in BenchLM's mirrored TruthfulQA snapshot, based on the public leaderboard captured on April 8, 2026.

Last updated: April 8, 2026 · mirrored from the public benchmark leaderboard
