A lightweight machine-learning competition benchmark that measures whether models can iteratively train, evaluate, and improve ML systems in low-resource settings.
As of March 2026, MiniMax M2.7 leads the MLE-Bench Lite leaderboard with 66.6%.
Year: 2026
Tasks: Low-resource ML competitions
Format: Autonomous iterative ML optimization
Difficulty: Agentic machine learning
MiniMax reports MLE-Bench Lite results from autonomous multi-round optimization on low-resource machine-learning competitions, making it a useful signal for agentic ML workflows.
MiniMax M2.7: Early Echoes of Self-Evolution
MiniMax M2.7, by MiniMax, currently leads MLE-Bench Lite with a score of 66.6%.
One AI model has been evaluated on MLE-Bench Lite on BenchLM.