MiMo-V2-Omni vs Qwen2.5-1M

Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, instruction-following, multilingual, and math workflows.

Benchmark data for this comparison is still filling in. BenchLM does not yet have sourced benchmark coverage for MiMo-V2-Omni, so head-to-head calls are limited for now to metadata such as context window, reasoning mode, and pricing where available; score-level comparisons will populate as public benchmark results land.

Qwen2.5-1M has the larger context window at 1M tokens, compared with 262K for MiMo-V2-Omni.
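To make that difference concrete, here is a minimal sketch of checking whether a long prompt fits either window. The Hugging Face tokenizer id and the reading of 262K as 262,144 tokens are assumptions for illustration, not BenchLM data.

    from transformers import AutoTokenizer

    # Advertised context windows from this comparison; "262K" is assumed to mean 262,144 tokens.
    CONTEXT_WINDOWS = {"Qwen2.5-1M": 1_000_000, "MiMo-V2-Omni": 262_144}

    def fits_in_context(text: str, model: str, tokenizer) -> bool:
        # Count prompt tokens and compare against the model's advertised window.
        # Token counts vary by tokenizer, so the MiMo-V2-Omni check is only an approximation here.
        return len(tokenizer.encode(text)) <= CONTEXT_WINDOWS[model]

    # The tokenizer id is a stand-in for illustration; use the tokenizer of the model you actually call.
    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct-1M")
    prompt = "Summarise the attached filings.\n" + "Lorem ipsum dolor sit amet. " * 200_000
    for model in CONTEXT_WINDOWS:
        print(model, fits_in_context(prompt, model, tok))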

Quick Verdict

No verdict yet. BenchLM does not have sourced benchmark results for MiMo-V2-Omni, so this comparison cannot be called until its scores land.

Agentic

Category winner: coming soon. MiMo-V2-Omni does not yet have sourced results for these benchmarks.

Benchmark             MiMo-V2-Omni    Qwen2.5-1M
Terminal-Bench 2.0    Coming soon     65%
BrowseComp            Coming soon     72%
OSWorld-Verified      Coming soon     59%

Coding

Category winner: coming soon. MiMo-V2-Omni does not yet have sourced results for these benchmarks.

Benchmark             MiMo-V2-Omni    Qwen2.5-1M
HumanEval             Coming soon     76%
SWE-bench Verified    Coming soon     47%
LiveCodeBench         Coming soon     40%
SWE-bench Pro         Coming soon     49%

Multimodal & Grounded

Category winner: coming soon. MiMo-V2-Omni does not yet have sourced results for these benchmarks.

Benchmark             MiMo-V2-Omni    Qwen2.5-1M
MMMU-Pro              Coming soon     63%
OfficeQA Pro          Coming soon     75%

Reasoning

Category winner: coming soon. MiMo-V2-Omni does not yet have sourced results for these benchmarks.

Benchmark             MiMo-V2-Omni    Qwen2.5-1M
MuSR                  Coming soon     79%
BBH                   Coming soon     82%
LongBench v2          Coming soon     82%
MRCRv2                Coming soon     81%

Knowledge

Category winner: coming soon. MiMo-V2-Omni does not yet have sourced results for these benchmarks.

Benchmark             MiMo-V2-Omni    Qwen2.5-1M
MMLU                  Coming soon     84%
GPQA                  Coming soon     83%
SuperGPQA             Coming soon     81%
MMLU-Pro              Coming soon     74%
HLE                   Coming soon     10%
FrontierScience       Coming soon     74%
SimpleQA              Coming soon     81%

Instruction Following

Category winner: coming soon. MiMo-V2-Omni does not yet have a sourced result for this benchmark.

Benchmark             MiMo-V2-Omni    Qwen2.5-1M
IFEval                Coming soon     84%

Multilingual

Category winner: coming soon. MiMo-V2-Omni does not yet have sourced results for these benchmarks.

Benchmark             MiMo-V2-Omni    Qwen2.5-1M
MGSM                  Coming soon     81%
MMLU-ProX             Coming soon     80%

Mathematics

Category winner: coming soon. MiMo-V2-Omni does not yet have sourced results for these benchmarks.

Benchmark             MiMo-V2-Omni    Qwen2.5-1M
AIME 2023             Coming soon     85%
AIME 2024             Coming soon     87%
AIME 2025             Coming soon     86%
HMMT Feb 2023         Coming soon     81%
HMMT Feb 2024         Coming soon     83%
HMMT Feb 2025         Coming soon     82%
BRUMO 2025            Coming soon     84%
MATH-500              Coming soon     83%

Frequently Asked Questions

Can I compare MiMo-V2-Omni and Qwen2.5-1M on BenchLM yet?

Not fully yet. BenchLM is tracking both models, but the sourced benchmark breakdown for this comparison is still coming soon.

Why does this comparison show “coming soon”?

BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.
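In other words, a benchmark-level call only exists when both sides have a sourced number. Below is a minimal sketch of that rule as described here, not BenchLM's actual implementation; the function name and threshold behavior are illustrative.

    from typing import Optional

    def benchmark_call(score_a: Optional[float], score_b: Optional[float]) -> str:
        # No call unless both models have a sourced score for the same benchmark.
        if score_a is None or score_b is None:
            return "coming soon"
        if score_a == score_b:
            return "tie"
        return "model A" if score_a > score_b else "model B"

    # MiMo-V2-Omni has no sourced Terminal-Bench 2.0 result yet, so no call is made.
    print(benchmark_call(None, 0.65))   # -> "coming soon"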

What data is available for MiMo-V2-Omni and Qwen2.5-1M today?

Pricing: Qwen2.5-1M is listed at $0.00 input / $0.00 output per 1M tokens. Both model pages still include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.
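Since the rates are quoted per 1M tokens, turning them into a per-request cost is one line of arithmetic. A minimal sketch follows; the token counts are illustrative, and the $0.00 rates are simply the figures listed above.

    def request_cost_usd(input_tokens: int, output_tokens: int,
                         input_price_per_m: float, output_price_per_m: float) -> float:
        # Prices are quoted in USD per 1M tokens, so scale the token counts by 1e-6.
        return (input_tokens * input_price_per_m + output_tokens * output_price_per_m) / 1_000_000

    # With Qwen2.5-1M listed at $0.00 / $0.00 per 1M tokens, any request prices out at $0.00.
    print(request_cost_usd(input_tokens=50_000, output_tokens=2_000,
                           input_price_per_m=0.00, output_price_per_m=0.00))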

Last updated: March 18, 2026
