meta-llama/Llama-3.1-70B-Instruct
Model Information
meta-llama/Llama-3.1-70B-Instruct is part of Meta's Llama 3.1 family of multilingual large language models (LLMs). These models are available in 8B, 70B, and 405B sizes and come in both pretrained and instruction-tuned variants. The instruction-tuned models are optimized for multilingual dialogue tasks and achieve strong performance across open-source and commercial benchmarks.
- Model Developer: Meta
- Model Release Date: July 23, 2024
- Supported Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, Thai
Model Architecture
meta-llama/Llama-3.1-70B-Instruct is an auto-regressive language model based on an enhanced transformer architecture.
The instruction-tuned versions leverage:
- Supervised Fine-Tuning (SFT)
- Reinforcement Learning from Human Feedback (RLHF)
These techniques align the model with human preferences around helpfulness, relevance, and safety.
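The instruction-tuned variants are trained on Meta's published Llama 3 chat prompt format, which wraps each turn in special header and end-of-turn tokens. As a minimal sketch, the single-turn layout can be assembled by hand (the special-token strings follow Meta's documented format; `build_prompt` is an illustrative helper of our own):

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.1 chat prompt by hand.

    Token strings per Meta's published chat format; this helper is
    illustrative only.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The prompt ends with an open assistant header, cueing the
        # model to generate the assistant turn next.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```

In practice, prefer the tokenizer's built-in chat templating over hand-built strings, so the format stays in sync with the model's tokenizer configuration.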
Benchmark Scores
| Category | Benchmark | Shots | Metric | Llama 3.1 70B Instruct |
| --- | --- | --- | --- | --- |
| General | MMLU (CoT) | 0 | Acc. (avg) | 86.0 |
| General | MMLU Pro (CoT) | 5 | Acc. (avg) | 66.4 |
| Steerability | IFEval | – | – | 87.5 |
| Reasoning | GPQA Diamond (CoT) | 0 | Accuracy | 48.0 |
| Code | HumanEval | 0 | Pass@1 | 80.5 |
| Code | MBPP EvalPlus (base) | 0 | Pass@1 | 86.0 |
| Math | MATH (CoT) | 0 | Sympy Score | 68.0 |
| Tool Use | BFCL v2 | 0 | AST Macro Avg. | 77.5 |
| Multilingual | MGSM | 0 | EM (exact match) | 86.9 |
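The code benchmarks above report Pass@1: the estimated probability that a single sampled completion passes the benchmark's test suite. A minimal sketch of the widely used unbiased pass@k estimator (computed from n sampled completions of which c pass; whether this exact estimator was used here is an assumption):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples, c of which are correct.

    Equals 1 minus the probability that a random size-k subset of the
    n samples contains no correct completion.
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples of which 5 pass, pass@1 reduces to the per-sample
# success rate c / n:
print(pass_at_k(10, 5, 1))  # → 0.5
```

For k = 1 the estimator simplifies to c / n, so a Pass@1 of 80.5 on HumanEval means roughly 80.5% of single completions pass their tests.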