BenchLM is tracking MiniMax M2.7 by MiniMax. This profile is currently excluded from the public leaderboard because it does not yet have enough trustworthy, non-generated benchmark coverage to rank safely. Generated rows may still appear below for context, but they are not enough on their own to make this model ranking-eligible.
MiniMax M2.7 is a proprietary model with a 200K token context window. It processes queries without explicit chain-of-thought reasoning, offering faster response times and lower token usage.
BenchLM links it directly to MiniMax M2.5 as the earlier model in its lineage. This profile currently has sourced scores for 11 of the 51 tracked benchmarks. BenchLM uses discounted fallback estimates where necessary so missing categories do not collapse the overall score, but publicly sourced rows still carry more weight than inferred ones.
Its strongest category is Agentic (#20), while its weakest is Coding (#22). This performance profile makes it particularly useful for coding agents, browser research, and computer-use workflows.
Creator: MiniMax
Source Type: Proprietary
Reasoning: Non-Reasoning
Context Window: 200K
Overall Score: Coming soon
Family: MiniMax M2.7 (base entry)
Related Earlier Model: MiniMax M2.5
MiniMax M2.7 is not yet ranked among the 58 models on the overall leaderboard; its estimated overall score is 57. It is created by MiniMax and features a 200K context window.
MiniMax M2.7 is not yet ranked in knowledge and understanding benchmarks; BenchLM has no sourced scores in this category.
MiniMax M2.7 ranks #22 out of 58 models in coding and programming benchmarks with an average score of 56.2. There are stronger options in this category.
MiniMax M2.7 ranks #20 out of 58 models in agentic tool use and computer tasks benchmarks with an average score of 57. There are stronger options in this category.
MiniMax M2.7 is not yet ranked in multimodal and grounded tasks benchmarks; BenchLM has no sourced scores in this category.
Not yet. MiniMax M2.7 currently has 11 sourced benchmark scores out of the 51 benchmarks BenchLM tracks. BenchLM may use discounted fallback values for missing categories, but trustworthy public rows still carry more weight than inferred ones.
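BenchLM does not publish its exact fallback formula. A minimal sketch of the general idea, assuming a simple discount multiplier on inferred category scores (the function name, discount value, and category names here are illustrative, not BenchLM's actual implementation):

```python
# Illustrative sketch of discounted fallback scoring (NOT BenchLM's real formula).
# Categories backed by sourced benchmark rows use their measured average at full
# weight; missing categories fall back to a discounted estimate, so they do not
# collapse the overall score but still count for less than sourced data.

FALLBACK_DISCOUNT = 0.8  # assumed penalty applied to inferred category scores


def overall_score(category_scores, sourced):
    """Weighted average of category scores (0-100).

    category_scores: dict mapping category name -> score
    sourced: set of categories that have trustworthy public benchmark rows
    """
    total = 0.0
    weight = 0.0
    for category, score in category_scores.items():
        if category in sourced:
            w, s = 1.0, score
        else:
            # Inferred categories are both discounted in value and down-weighted.
            w = FALLBACK_DISCOUNT
            s = score * FALLBACK_DISCOUNT
        total += s * w
        weight += w
    return total / weight if weight else 0.0


scores = {"Coding": 56.2, "Agentic": 57.0, "Knowledge": 60.0}
print(round(overall_score(scores, sourced={"Coding", "Agentic"}), 1))
```

With only "Coding" and "Agentic" sourced, the inferred "Knowledge" estimate pulls the overall score down less than a sourced row of the same value would, which matches the stated behavior that trustworthy public rows carry more weight than inferred ones.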
MiniMax M2.7 has a context window of 200K tokens, which determines how much text it can process in a single interaction.