MM-ClawBench

An OpenClaw-derived agent benchmark covering practical work and life tasks such as office document delivery, research, planning, and code maintenance.

Benchmark score on MM-ClawBench — May 1, 2026

BenchLM mirrors the published score view for MM-ClawBench. MiniMax M2.7 leads the public snapshot at 62.7%, followed by MiMo-V2.5 (23.8%). BenchLM does not use these results to rank models overall.

2 models · Agentic · Current · Display only · Updated May 1, 2026

About MM-ClawBench

Year: 2026
Tasks: OpenClaw-style real-world tasks
Format: Agent workflow evaluation
Difficulty: Broad real-world agentic execution

MiniMax built MM-ClawBench from commonly used OpenClaw tasks to evaluate how well models handle broad real-world agent scenarios across work and personal productivity.

BenchLM freshness & provenance

Version: MM-ClawBench 2026
Refresh cadence: Quarterly
Staleness state: Current
Question availability: Public benchmark set

BenchLM uses freshness metadata to decide whether a benchmark should still be treated as a strong differentiator, a benchmark to watch, or a display-only reference. For the full scoring policy, see the BenchLM methodology page.
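As a rough illustration only (BenchLM has not published its implementation; the field names, tier labels, and thresholds below are assumptions), freshness metadata like the fields above could map to a display tier along these lines:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkFreshness:
    # Fields mirror the metadata shown above; names are hypothetical.
    version: str
    refresh_cadence_days: int   # e.g. 90 for a quarterly cadence
    days_since_refresh: int
    questions_public: bool

def display_tier(meta: BenchmarkFreshness) -> str:
    """Map freshness metadata to one of the three tiers named in the text.
    Thresholds are illustrative assumptions, not BenchLM's actual policy."""
    if meta.questions_public and meta.days_since_refresh > 2 * meta.refresh_cadence_days:
        # A public question set plus a long-stale refresh: reference only.
        return "display-only reference"
    if meta.days_since_refresh > meta.refresh_cadence_days:
        # Past its refresh window but not badly stale: keep an eye on it.
        return "benchmark to watch"
    return "strong differentiator"

# Example: a quarterly benchmark refreshed 30 days ago with a public question set.
print(display_tier(BenchmarkFreshness("MM-ClawBench 2026", 90, 30, True)))
```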

Benchmark score table (2 models)

1. MiniMax M2.7: 62.7%
2. MiMo-V2.5: 23.8%

FAQ

What does MM-ClawBench measure?

MM-ClawBench is an OpenClaw-derived agent benchmark covering practical work and life tasks such as office document delivery, research, planning, and code maintenance.

Which model scores highest on MM-ClawBench?

MiniMax M2.7, developed by MiniMax, currently leads MM-ClawBench with a score of 62.7%.

How many models are evaluated on MM-ClawBench?

BenchLM currently lists two AI models evaluated on MM-ClawBench.

Last updated: May 1, 2026 · BenchLM version MM-ClawBench 2026
