Mistral 8x7B Benchmark Scores & Performance

Benchmark analysis of Mistral 8x7B by Mistral across 14 tests.

Creator

Mistral

Source Type

Open Weight

Reasoning

Non-Reasoning

Context Window

32K

Overall Score

52 (#56 of 88)

Knowledge Benchmarks

MMLU
65
GPQA
64
SuperGPQA
62
OpenBookQA
60

Coding Benchmarks

HumanEval
55

Mathematics Benchmarks

AIME 2023
65
AIME 2024
67
AIME 2025
66
HMMT Feb 2023
61
HMMT Feb 2024
63
HMMT Feb 2025
62
BRUMO 2025
64

Reasoning Benchmarks

SimpleQA
63
MuSR
61

Frequently Asked Questions

How does Mistral 8x7B perform overall in AI benchmarks?

Mistral 8x7B ranks #56 out of 88 models with an overall score of 52. It is created by Mistral and features a 32K context window.

Is Mistral 8x7B good for knowledge and understanding?

Mistral 8x7B ranks #52 out of 88 models in knowledge and understanding benchmarks with an average score of 62.8. There are stronger options in this category.

Is Mistral 8x7B good for coding and programming?

Mistral 8x7B ranks #55 out of 88 models in coding and programming benchmarks with a score of 55 on its single coding benchmark, HumanEval. There are stronger options in this category.

Is Mistral 8x7B good for mathematics?

Mistral 8x7B ranks #51 out of 88 models in mathematics benchmarks with an average score of 64. There are stronger options in this category.

Is Mistral 8x7B good for reasoning and logic?

Mistral 8x7B ranks #50 out of 88 models in reasoning and logic benchmarks with an average score of 62. There are stronger options in this category.
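The category averages quoted in the answers above can be reproduced from the individual benchmark scores listed earlier on this page. A minimal Python sketch (score values copied verbatim from the tables above; each category average is an unweighted mean):

```python
# Benchmark scores for Mistral 8x7B, as listed on this page.
scores = {
    "knowledge": [65, 64, 62, 60],                 # MMLU, GPQA, SuperGPQA, OpenBookQA
    "coding": [55],                                # HumanEval
    "mathematics": [65, 67, 66, 61, 63, 62, 64],   # AIME 2023-2025, HMMT Feb 2023-2025, BRUMO 2025
    "reasoning": [63, 61],                         # SimpleQA, MuSR
}

# Unweighted mean per category, rounded to one decimal place.
averages = {cat: round(sum(vals) / len(vals), 1) for cat, vals in scores.items()}
print(averages)  # knowledge: 62.8, coding: 55.0, mathematics: 64.0, reasoning: 62.0
```

These match the averages cited in the FAQ answers (62.8 for knowledge, 55 for coding, 64 for mathematics, 62 for reasoning).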

Is Mistral 8x7B open source?

Mistral 8x7B is an open-weight model: Mistral publishes the model weights, so it can be downloaded and run locally or fine-tuned for specific use cases.

What is the context window size of Mistral 8x7B?

Mistral 8x7B has a context window of 32K tokens, which determines how much text it can process in a single interaction.
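As a rough illustration of what a 32K-token window means in practice, the sketch below estimates whether a text fits, using an assumed heuristic of about 4 characters per token for English prose; the exact count depends on the model's actual tokenizer, so this is only a sanity check, not the real limit test.

```python
CONTEXT_WINDOW = 32_000   # 32K tokens, as listed for Mistral 8x7B
CHARS_PER_TOKEN = 4       # rough heuristic for English text (assumption)

def estimate_tokens(text: str) -> int:
    """Crude token estimate; the model's tokenizer gives the exact count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(text: str, reserve_for_output: int = 1_000) -> bool:
    """Check whether `text` likely fits, leaving room for the model's reply."""
    return estimate_tokens(text) <= CONTEXT_WINDOW - reserve_for_output

print(fits_in_context("hello " * 100))   # a short text easily fits
```

A 32K window at this heuristic corresponds to very roughly 120K characters of English text in a single interaction.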