MiMo-V2.5
BenchLM is tracking MiMo-V2.5, but this profile is currently excluded from the public leaderboard because it does not yet have enough non-generated benchmark coverage to be ranked reliably. Only non-generated public benchmark rows appear below.
MiMo-V2.5 is a proprietary model with a 1M-token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.
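As a rough illustration of that trade-off, here is a minimal sketch of querying such a model through an OpenAI-compatible chat endpoint and inspecting token usage. The endpoint URL, the model identifier, and the existence of an OpenAI-compatible API for MiMo-V2.5 are all assumptions for illustration, not confirmed details:

```python
# Minimal sketch: querying a chain-of-thought model through a hypothetical
# OpenAI-compatible endpoint. The base_url and model name are assumptions,
# not confirmed details of MiMo-V2.5.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mimo-v2.5",  # hypothetical model identifier
    messages=[
        {
            "role": "user",
            "content": "If a train leaves at 3:40pm and the trip takes 2h 35m, when does it arrive?",
        },
    ],
)

# With explicit chain-of-thought, the completion includes the reasoning
# trace, so completion_tokens (and therefore cost and latency) run higher
# than for a comparable direct-answer model.
print(response.choices[0].message.content)
print("completion tokens:", response.usage.completion_tokens)
```

Because the reasoning trace is billed as completion tokens, the same question typically costs more, and takes longer, than it would on a direct-answer model.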
MiMo-V2.5 sits in the MiMo-V2.5 family alongside MiMo-V2.5-Pro, and BenchLM links it directly to MiMo-V2-Omni as the earlier model in that lineage. This profile currently covers 8 of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.
Its strongest category is Multimodal & Grounded (#27). Relative to its other covered categories, that makes it best suited to screenshots, documents, charts, and other grounded multimodal workflows.
Ranking Distribution
Category rank across 3 benchmark categories — sorted by best rank
Category Performance
Scores across all benchmark categories (0-100 scale)
Category Breakdown
Agentic
Coding
Reasoning
Knowledge
Math
Multilingual
Multimodal (#27)
Inst. Following
Benchmark Details
Only benchmark rows with an attached exact-source record are shown here. Manual rows without a verified source and generated rows are hidden from model pages.
Frequently Asked Questions
How does MiMo-V2.5 perform overall in AI benchmarks?
MiMo-V2.5 has 8 published benchmark scores on BenchLM, but it does not yet have enough non-generated coverage to receive a global overall rank.
Is MiMo-V2.5 good for coding and programming?
MiMo-V2.5 has visible benchmark coverage in coding and programming, but BenchLM does not currently assign it a global category rank there.
Is MiMo-V2.5 good for agentic tool use and computer tasks?
MiMo-V2.5 has visible benchmark coverage in agentic tool use and computer tasks, but BenchLM does not currently assign it a global category rank there.
Is MiMo-V2.5 good for multimodal and grounded tasks?
MiMo-V2.5 ranks #27 out of 111 models on multimodal and grounded task benchmarks, with an average score of 72.2. There are stronger options in this category.
Which sibling models are related to MiMo-V2.5?
MiMo-V2.5 belongs to the MiMo-V2.5 family. Related variants on BenchLM include MiMo-V2.5-Pro.
Does MiMo-V2.5 have full benchmark coverage on BenchLM?
Not yet. MiMo-V2.5 currently has 8 published benchmark scores out of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.
What is the context window size of MiMo-V2.5?
MiMo-V2.5 has a context window of 1M tokens, which determines how much text it can process in a single interaction.
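As a back-of-the-envelope sketch, you can estimate whether a document fits in a 1M-token window by counting tokens with a tokenizer. MiMo-V2.5's actual tokenizer is not public, so the tiktoken encoding below is only a stand-in and the counts are approximate:

```python
# Rough estimate of whether a document fits in a 1M-token context window.
# tiktoken's cl100k_base encoding is a stand-in: MiMo-V2.5's actual
# tokenizer is not public, so these counts are approximate.
import tiktoken

CONTEXT_WINDOW = 1_000_000

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(text: str, reserve_for_output: int = 8_000) -> bool:
    """True if the text likely fits, leaving headroom for the response."""
    n_tokens = len(enc.encode(text))
    return n_tokens + reserve_for_output <= CONTEXT_WINDOW

with open("big_report.txt", encoding="utf-8") as f:
    print(fits_in_context(f.read()))
```

Reserving headroom for the response matters because, on most APIs, prompt and completion tokens share the same context window.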