
Phi-4

Microsoft · Established · Released Jan 1, 2025
Overall Score: Coming soon
Arena Elo: 1256
Categories Ranked: 4 of 8
Price (1M tokens): $0 in / $0 out
Speed: 35 tok/s
Context: 16K
Open Weight · Self-host · Non-Reasoning
Confidence: base

BenchLM is tracking Phi-4, but sourced benchmark results are not published on the site yet. This page currently shows the model metadata we can verify now, and score-level benchmark coverage will appear once public evaluations land.

Phi-4 is an open weight model with a 16K-token context window. It processes queries without explicit chain-of-thought reasoning, offering faster response times and lower token usage.

This profile currently has 0 sourced benchmarks on BenchLM, so the benchmark sections below are intentionally marked as coming soon.

Among its ranked categories, Phi-4 places highest in Multilingual (#79) and lowest in Agentic (#93). The narrow spread between those ranks points to an evenly balanced profile across task types rather than a single standout strength.

Ranking Distribution

Category rank across 6 benchmark categories — sorted by best rank

Category Performance

Scores across all benchmark categories (0-100 scale)

Category Breakdown

Agentic

Rank #93 · Score 12.9 / 100 · Weight 22% · 0 benchmarks
Tracked: Terminal-Bench 2.0, BrowseComp, OSWorld-Verified, GAIA, TAU-bench, WebArena

Coding

Score 48.5 / 100 · Weight 20% · 0 benchmarks
Tracked: SWE-bench Verified, LiveCodeBench, SWE-bench Pro, SWE-Rebench, SciCode

Reasoning

Rank #91 · Score 0.0 / 100 · Weight 17% · 0 benchmarks
Tracked: MuSR, LongBench v2, MRCRv2, ARC-AGI-2

Knowledge

Score 33.1 / 100 · Weight 12% · 0 benchmarks
Tracked: GPQA, SuperGPQA, MMLU-Pro, HLE, FrontierScience, SimpleQA

Math

Score 78.2 / 100 · Weight 5% · 0 benchmarks
Tracked: AIME 2025, BRUMO 2025, MATH-500, FrontierMath

Multilingual

Rank #79 · Score 25.8 / 100 · Weight 7% · 0 benchmarks
Tracked: MGSM, MMLU-ProX

Multimodal

Rank #87 · Score 18.6 / 100 · Weight 12% · 0 benchmarks
Tracked: MMMU-Pro, OfficeQA Pro, CharXiv, CharXiv w/o tools

Inst. Following

Score 0.0 / 100 · Weight 5% · 0 benchmarks
Tracked: IFEval, IFBench
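
The category weights above sum to 100%, which suggests the pending Overall Score is some form of weighted aggregate. As a purely illustrative sketch (BenchLM's actual aggregation method is not published on this page, and it may, for example, exclude categories with zero sourced benchmarks), a simple weighted average over the listed category scores would be computed like this:

    # Category scores and weights as listed in the breakdown above.
    # Illustrative only: BenchLM's real aggregation may differ.
    categories = {
        "Agentic":         (12.9, 0.22),
        "Coding":          (48.5, 0.20),
        "Reasoning":       (0.0,  0.17),
        "Knowledge":       (33.1, 0.12),
        "Math":            (78.2, 0.05),
        "Multilingual":    (25.8, 0.07),
        "Multimodal":      (18.6, 0.12),
        "Inst. Following": (0.0,  0.05),
    }

    total_weight = sum(w for _, w in categories.values())
    weighted_avg = sum(score * w for score, w in categories.values()) / total_weight
    print(f"Weighted average: {weighted_avg:.1f} / 100")

With the numbers above this comes out around 24.5, but treat that only as a demonstration of the arithmetic, not as the Overall Score that is still marked coming soon.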

Chatbot Arena Performance

Text Overall: 1256 (CI ±4.6, 24,126 votes)
Coding: 1306 (CI ±9.9, 3,305 votes)
Math: 1264 (CI ±10.4, 2,764 votes)
Instruction Following: 1244 (CI ±6.6, 9,162 votes)
Creative Writing: 1210 (CI ±9.6, 4,062 votes)
Multi-turn: 1242 (CI ±10.2, 3,517 votes)
Hard Prompts: 1277 (CI ±7.8, 5,747 votes)
Hard Prompts (English): 1289 (CI ±9.3, 3,804 votes)
Longer Query: 1265 (CI ±10.8, 2,896 votes)
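
For context on what the Arena ratings mean in practice, the gap between two ratings maps to an expected head-to-head win rate. The sketch below uses the standard Elo/Bradley-Terry expected-score formula with a 400-point scale; it approximates how Arena-style ratings are usually interpreted and is not BenchLM's or LMArena's own code:

    # Expected win probability of a model rated r_a against one rated r_b,
    # using the standard Elo formula with a 400-point scale.
    def expected_win_rate(r_a: float, r_b: float) -> float:
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    # Example: Phi-4's overall text rating (1256) vs. a hypothetical model rated 1306.
    print(f"{expected_win_rate(1256, 1306):.0%}")  # roughly 43%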

Benchmark Details

Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.

Frequently Asked Questions

How does Phi-4 perform overall in AI benchmarks?

BenchLM is tracking Phi-4, but sourced benchmark coverage is still coming soon. We currently list its creator, model type, and context window while we wait for public benchmark results.

Is Phi-4 open source?

Yes, Phi-4 is an open weight model created by Microsoft, meaning it can be downloaded and run locally or fine-tuned for specific use cases.
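
As a minimal sketch of what running it locally can look like, assuming the weights are published under the Hugging Face repo id microsoft/phi-4 and that the transformers library is installed (check the model card for the exact repo id, license, and hardware requirements):

    # Minimal local text-generation sketch with Hugging Face transformers.
    # "microsoft/phi-4" is assumed to be the published repo id; verify before use.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="microsoft/phi-4",
        device_map="auto",   # place weights on available GPUs/CPU
        torch_dtype="auto",  # use the dtype the checkpoint was saved in
    )

    out = generator("Summarize what an open-weight model is.", max_new_tokens=128)
    print(out[0]["generated_text"])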

Does Phi-4 have full benchmark coverage on BenchLM?

Not yet. Phi-4 currently has 0 published benchmark scores out of the 178 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.

What is the context window size of Phi-4?

Phi-4 has a context window of 16K tokens, which determines how much text it can process in a single interaction.
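
One way to make the 16K limit concrete is to count prompt tokens with the model's own tokenizer before sending a request. A small sketch, again assuming the microsoft/phi-4 repo id and treating 16K as roughly 16,000 tokens (check the model card for the exact limit):

    # Check whether a prompt fits inside the advertised 16K context window.
    from transformers import AutoTokenizer

    CONTEXT_WINDOW = 16_000  # "16K" as listed above; the exact limit may differ
    tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-4")

    prompt = "..."  # your input text here
    n_tokens = len(tokenizer(prompt)["input_ids"])
    print(f"{n_tokens} tokens; fits: {n_tokens <= CONTEXT_WINDOW}")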

Last updated: April 24, 2026 · Runtime metrics stay blank until BenchLM has a sourced snapshot.
