Qwen2.5-1M
BenchLM is tracking Qwen2.5-1M, but sourced benchmark results are not yet published on the site. This page shows the model metadata we can verify today; score-level benchmark coverage will appear once public evaluations land.
Qwen2.5-1M is an open-weight model with a 1M-token context window. It answers queries without explicit chain-of-thought reasoning, which gives it faster response times and lower token usage than reasoning-focused models.
This profile currently has 0 sourced benchmarks on BenchLM, so the benchmark sections below are intentionally marked as coming soon.
Its strongest category is Reasoning (#23), while its weakest is Multimodal & Grounded (#61). This profile suggests it is best suited to complex reasoning, multi-step problem solving, and analytical tasks.
Ranking Distribution
Category rank across 8 benchmark categories — sorted by best rank
Category Performance
Scores across all benchmark categories (0-100 scale)
Category Breakdown
Agentic: #40
Coding: #48
Reasoning: #23
Knowledge: #44
Math: #27
Multilingual: #44
Multimodal & Grounded: #61
Inst. Following: #55
Chatbot Arena Performance
Benchmark Details
Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.
Compare This Model
See how Qwen2.5-1M stacks up against similar models
Frequently Asked Questions
How does Qwen2.5-1M perform overall in AI benchmarks?
BenchLM is tracking Qwen2.5-1M, but sourced benchmark results have not been published yet. For now we list its creator, model type, and context window while we wait for public evaluations.
Is Qwen2.5-1M open source?
Yes. Qwen2.5-1M is an open-weight model created by Alibaba's Qwen team, meaning its weights can be downloaded and run locally or fine-tuned for specific use cases.
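As a rough illustration, here is a minimal sketch of running a Qwen2.5-1M checkpoint locally with the Hugging Face transformers library. The repository id, dtype, and device settings below are assumptions for illustration; substitute the checkpoint and hardware configuration you actually use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name for illustration; pick the Qwen2.5-1M
# checkpoint that matches your deployment.
MODEL_ID = "Qwen/Qwen2.5-7B-Instruct-1M"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",  # choose an appropriate dtype for your hardware
    device_map="auto",   # spread layers across available devices
)

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Summarize this document in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```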
Does Qwen2.5-1M have full benchmark coverage on BenchLM?
Not yet. Qwen2.5-1M currently has 0 published benchmark scores out of the 152 benchmarks BenchLM tracks. BenchLM only displays sourced, non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.
What is the context window size of Qwen2.5-1M?
Qwen2.5-1M has a context window of 1M tokens (roughly one million tokens), which determines how much text it can process in a single interaction.
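To get a feel for what a 1M-token window means in practice, the sketch below counts the tokens in a document and reports how much of the window it would consume. The checkpoint id, file name, and the nominal 1,000,000-token figure are assumptions for illustration.

```python
from transformers import AutoTokenizer

# Assumed nominal window size for Qwen2.5-1M.
CONTEXT_WINDOW = 1_000_000

# Assumed checkpoint; use the tokenizer that matches your model.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct-1M")

# Hypothetical input file for illustration.
with open("large_document.txt", encoding="utf-8") as f:
    text = f.read()

n_tokens = len(tokenizer.encode(text))
print(f"{n_tokens:,} tokens = {n_tokens / CONTEXT_WINDOW:.1%} of the window")

# Note: leave headroom in the window for the model's generated output too.
```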