Llama 3 70B Benchmark Scores & Performance

Benchmark analysis of Llama 3 70B by Meta across 14 tests.

Creator

Meta

Source Type

Open Weight

Reasoning

Non-Reasoning

Context Window

128K

Overall Score

48 (#60 of 88)

Knowledge Benchmarks

MMLU
58
GPQA
58
SuperGPQA
56
OpenBookQA
54

Coding Benchmarks

HumanEval
50

Mathematics Benchmarks

AIME 2023
58
AIME 2024
60
AIME 2025
59
HMMT Feb 2023
54
HMMT Feb 2024
56
HMMT Feb 2025
55
BRUMO 2025
57

Reasoning Benchmarks

SimpleQA
56
MuSR
54
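The category averages quoted in the FAQ below follow directly from the scores listed above. A minimal sketch of that arithmetic (the grouping mirrors the section headings on this page):

```python
# Benchmark scores for Llama 3 70B as listed above, grouped by category.
scores = {
    "knowledge": {"MMLU": 58, "GPQA": 58, "SuperGPQA": 56, "OpenBookQA": 54},
    "coding": {"HumanEval": 50},
    "mathematics": {"AIME 2023": 58, "AIME 2024": 60, "AIME 2025": 59,
                    "HMMT Feb 2023": 54, "HMMT Feb 2024": 56,
                    "HMMT Feb 2025": 55, "BRUMO 2025": 57},
    "reasoning": {"SimpleQA": 56, "MuSR": 54},
}

def category_average(name: str) -> float:
    """Unweighted mean of the listed scores in one category."""
    vals = scores[name].values()
    return round(sum(vals) / len(vals), 1)

for cat in scores:
    print(cat, category_average(cat))
# knowledge 56.5, coding 50.0, mathematics 57.0, reasoning 55.0
```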

Frequently Asked Questions

How does Llama 3 70B perform overall in AI benchmarks?

Llama 3 70B ranks #60 out of 88 models with an overall score of 48. It is created by Meta and features a 128K context window.

Is Llama 3 70B good for knowledge and understanding?

Consistent with its overall rank of #60 out of 88, Llama 3 70B averages 56.5 across the four knowledge and understanding benchmarks listed above. There are stronger options in this category.

Is Llama 3 70B good for coding and programming?

Llama 3 70B scores 50 on HumanEval, the only coding benchmark listed here, so its coding average is also 50. There are stronger options in this category.

Is Llama 3 70B good for mathematics?

Averaged across the seven mathematics benchmarks above (AIME, HMMT, and BRUMO), Llama 3 70B scores 57. There are stronger options in this category.

Is Llama 3 70B good for reasoning and logic?

Across the two reasoning benchmarks listed (SimpleQA and MuSR), Llama 3 70B averages 55. There are stronger options in this category.

Is Llama 3 70B open source?

Llama 3 70B is an open-weight model: Meta publishes the model weights, so it can be downloaded, run locally, and fine-tuned for specific use cases. Note that it is distributed under Meta's Llama Community License rather than a standard open-source license.

What is the context window size of Llama 3 70B?

Llama 3 70B has a context window of 128K tokens, which caps how much text (prompt plus generated output) it can handle in a single interaction.
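In practice, applications must keep prompt plus expected output within that cap. A minimal sketch of clamping a long prompt to fit, assuming the 128K figure above (treated here as 128,000 tokens) and a hypothetical output budget:

```python
CONTEXT_WINDOW = 128_000      # 128K tokens, as listed above (approximated as 128,000)
RESERVED_FOR_OUTPUT = 4_000   # hypothetical budget reserved for the model's reply

def fit_prompt(token_ids: list[int]) -> list[int]:
    """Drop the oldest tokens so that prompt + reply fit inside the window."""
    budget = CONTEXT_WINDOW - RESERVED_FOR_OUTPUT
    return token_ids[-budget:] if len(token_ids) > budget else token_ids

# A 200K-token prompt is truncated to its most recent 124K tokens.
long_prompt = list(range(200_000))
print(len(fit_prompt(long_prompt)))  # 124000
```

Keeping the most recent tokens is a common default for chat-style use, where the latest turns matter most; other strategies (summarizing or dropping middle turns) trade recency for coverage.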