Benchmark analysis of Llama 3 70B by Meta across 14 tests.
Creator: Meta
Source Type: Open Weight
Reasoning: Non-Reasoning
Context Window: 128K
Overall Score: 48
Llama 3 70B ranks #60 out of 88 models overall, with a score of 48. It was created by Meta and offers a 128K context window.
Category rankings for Llama 3 70B (each #60 out of 88 models):
- Knowledge and understanding: average score 56.5
- Coding and programming: average score 50
- Mathematics: average score 57
- Reasoning and logic: average score 55
Stronger options exist in each of these categories.
Yes, Llama 3 70B is an open-weight model created by Meta, meaning its weights can be downloaded, run locally, or fine-tuned for specific use cases.
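Before downloading an open-weight model of this size, it helps to estimate how much memory the weights alone will need. The sketch below is a rough back-of-the-envelope calculation, assuming ~70 billion parameters (from the model name) and common weight precisions; real deployments also need memory for activations and the KV cache, so treat these as lower bounds.

```python
# Rough memory estimate for hosting Llama 3 70B's weights locally.
# Assumption: ~70 billion parameters, inferred from the model name.

def weights_gib(num_params: float, bytes_per_param: float) -> float:
    """Memory for the model weights alone, in GiB."""
    return num_params * bytes_per_param / 2**30

PARAMS = 70e9

# Common precisions: fp16 (2 bytes), int8 (1 byte), int4 (0.5 bytes).
for label, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: ~{weights_gib(PARAMS, nbytes):.0f} GiB")
```

At fp16 the weights alone run to roughly 130 GiB, which is why quantized (int8/int4) variants are the usual choice for local use.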
Llama 3 70B has a context window of 128K tokens, which determines how much text it can process in a single interaction.
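In practice, the context window is a budget that the prompt and the generated output must share. The sketch below illustrates a pre-flight check against a 128K limit; whitespace splitting stands in for a real tokenizer (which would come from a library such as `transformers`), and the round figure of 128,000 tokens is an approximation of the advertised "128K".

```python
# Sketch: does a prompt fit in a 128K-token context window?
# Assumptions: whitespace splitting as a crude stand-in for real
# tokenization, and 128,000 as a round approximation of "128K".

CONTEXT_WINDOW = 128_000

def fits_in_context(prompt: str, reserved_for_output: int = 1_000) -> bool:
    """True if the prompt leaves room for the reserved output tokens."""
    approx_tokens = len(prompt.split())  # stand-in for a real tokenizer
    return approx_tokens + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("Summarize this quarterly report."))
```

A real check would tokenize with the model's own tokenizer, since token counts from whitespace splitting can differ substantially from the model's actual tokenization.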