Mistral 7B v0.3 Benchmark Scores & Performance

Benchmark analysis of Mistral 7B v0.3 by Mistral across 14 tests.

Creator: Mistral
Source Type: Open Weight
Reasoning: Non-Reasoning
Context Window: 32K
Overall Score: 21 (#87 of 88)

Knowledge Benchmarks

MMLU: 30
GPQA: 29
SuperGPQA: 27
OpenBookQA: 25

Coding Benchmarks

HumanEval: 22

Mathematics Benchmarks

AIME 2023: 30
AIME 2024: 32
AIME 2025: 31
HMMT Feb 2023: 26
HMMT Feb 2024: 28
HMMT Feb 2025: 27
BRUMO 2025: 29

Reasoning Benchmarks

SimpleQA: 28
MuSR: 26

Frequently Asked Questions

How does Mistral 7B v0.3 perform overall in AI benchmarks?

Mistral 7B v0.3 ranks #87 out of 88 models with an overall score of 21. It is created by Mistral and features a 32K context window.

Is Mistral 7B v0.3 good for knowledge and understanding?

Mistral 7B v0.3 ranks #87 out of 88 models in knowledge and understanding benchmarks with an average score of 27.8. There are stronger options in this category.

Is Mistral 7B v0.3 good for coding and programming?

Mistral 7B v0.3 ranks #87 out of 88 models in coding and programming benchmarks with an average score of 22. There are stronger options in this category.

Is Mistral 7B v0.3 good for mathematics?

Mistral 7B v0.3 ranks #87 out of 88 models in mathematics benchmarks with an average score of 29. There are stronger options in this category.

Is Mistral 7B v0.3 good for reasoning and logic?

Mistral 7B v0.3 ranks #87 out of 88 models in reasoning and logic benchmarks with an average score of 27. There are stronger options in this category.

Is Mistral 7B v0.3 open source?

Yes. Mistral 7B v0.3 is an open-weight model created by Mistral: its weights are freely downloadable, so it can be run locally or fine-tuned for specific use cases.

What is the context window size of Mistral 7B v0.3?

Mistral 7B v0.3 has a context window of 32K tokens, which determines how much text it can process in a single interaction.
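To get a feel for that limit, here is a rough sketch of estimating whether a document fits within the 32K-token window. The 4-characters-per-token ratio is a common heuristic for English text, not an exact count from Mistral's tokenizer, and the round 32,000 figure is an assumption for illustration:

```python
# Rough estimate of whether a text fits in a 32K-token context window.
# Assumes ~4 characters per token, a common heuristic for English text;
# an actual tokenizer would give exact counts.
CONTEXT_WINDOW = 32_000  # tokens (illustrative round figure)

def fits_in_context(text: str, chars_per_token: float = 4.0) -> bool:
    """Return True if the estimated token count fits in the window."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= CONTEXT_WINDOW

doc = "word " * 10_000  # ~50,000 characters, roughly 12,500 estimated tokens
print(fits_in_context(doc))  # True
```

In practice you would tokenize with the model's own tokenizer for an exact count, since character-per-token ratios vary with language and content.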