Mistral 8x7B v0.2 Benchmark Scores & Performance

Benchmark analysis of Mistral 8x7B v0.2 by Mistral across 14 tests.

Creator

Mistral

Source Type

Open Weight

Reasoning

Non-Reasoning

Context Window

32K

Overall Score

20

Rank

#88 of 88

Knowledge Benchmarks

MMLU
29
GPQA
28
SuperGPQA
26
OpenBookQA
24

Coding Benchmarks

HumanEval
21

Mathematics Benchmarks

AIME 2023
29
AIME 2024
31
AIME 2025
30
HMMT Feb 2023
25
HMMT Feb 2024
27
HMMT Feb 2025
26
BRUMO 2025
28

Reasoning Benchmarks

SimpleQA
27
MuSR
25

Frequently Asked Questions

How does Mistral 8x7B v0.2 perform overall in AI benchmarks?

Mistral 8x7B v0.2 ranks #88 out of 88 models with an overall score of 20. It is created by Mistral and features a 32K context window.

Is Mistral 8x7B v0.2 good for knowledge and understanding?

Mistral 8x7B v0.2 ranks #88 out of 88 models in knowledge and understanding benchmarks with an average score of 26.8. There are stronger options in this category.

Is Mistral 8x7B v0.2 good for coding and programming?

Mistral 8x7B v0.2 ranks #88 out of 88 models in coding and programming benchmarks with an average score of 21. There are stronger options in this category.

Is Mistral 8x7B v0.2 good for mathematics?

Mistral 8x7B v0.2 ranks #88 out of 88 models in mathematics benchmarks with an average score of 28. There are stronger options in this category.

Is Mistral 8x7B v0.2 good for reasoning and logic?

Mistral 8x7B v0.2 ranks #88 out of 88 models in reasoning and logic benchmarks with an average score of 26. There are stronger options in this category.
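The category averages quoted in the answers above are simple means of the per-benchmark scores listed earlier on this page. A minimal sketch of that arithmetic (scores copied from the tables above; the grouping into categories follows this page's section headings):

```python
from statistics import mean

# Per-benchmark scores as listed in the tables above.
scores = {
    "knowledge": [29, 28, 26, 24],                 # MMLU, GPQA, SuperGPQA, OpenBookQA
    "coding": [21],                                # HumanEval
    "mathematics": [29, 31, 30, 25, 27, 26, 28],   # AIME 2023-2025, HMMT Feb 2023-2025, BRUMO 2025
    "reasoning": [27, 25],                         # SimpleQA, MuSR
}

# Category average, rounded to one decimal place as in the FAQ answers.
averages = {cat: round(mean(vals), 1) for cat, vals in scores.items()}
print(averages)  # knowledge: 26.8, coding: 21, mathematics: 28, reasoning: 26
```

This reproduces the 26.8, 21, 28, and 26 figures cited in the answers above.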

Is Mistral 8x7B v0.2 open source?

Mistral 8x7B v0.2 is released by Mistral as an open weight model: its weights can be downloaded and run locally or fine-tuned for specific use cases, though the applicable license terms govern how it may be used.

What is the context window size of Mistral 8x7B v0.2?

Mistral 8x7B v0.2 has a context window of 32K tokens, which determines how much text it can process in a single interaction.
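The context window bounds the total tokens (prompt plus generated output) the model handles in one interaction. A minimal sketch of the budgeting arithmetic, assuming 32K means 32,768 tokens and using a rough 4-characters-per-token heuristic (both are illustrative assumptions, not Mistral specifications; a real tokenizer should be used for exact counts):

```python
CONTEXT_WINDOW = 32 * 1024  # 32K tokens, assumed to mean 32,768
CHARS_PER_TOKEN = 4         # rough heuristic for English text, not a tokenizer

def fits_in_context(prompt_chars: int, max_new_tokens: int) -> bool:
    """Estimate whether a prompt plus the requested output fits in the window."""
    est_prompt_tokens = prompt_chars // CHARS_PER_TOKEN
    return est_prompt_tokens + max_new_tokens <= CONTEXT_WINDOW

# A ~100,000-character document (~25,000 estimated tokens) plus 4,000 new
# tokens fits; doubling the document (~50,000 tokens) does not.
print(fits_in_context(100_000, 4_000))  # True:  25_000 + 4_000 <= 32_768
print(fits_in_context(200_000, 4_000))  # False: 50_000 + 4_000 >  32_768
```

Prompts that exceed the window must be truncated, chunked, or summarized before being sent to the model.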