# meta-llama/Llama-3.3-70b-Instruct

## Model Information
`meta-llama/Llama-3.3-70b-Instruct` is part of Meta's Llama 3.3 collection, a multilingual large language model (LLM) released in a single 70B-parameter size. This instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many existing open-source and commercial models on common industry benchmarks.
- Model Developer: Meta
- Model Release Date: December 6, 2024
- Supported Languages: English, German, French, Italian, Portuguese, Hindi, Spanish, Thai
## Model Architecture

`meta-llama/Llama-3.3-70b-Instruct` is an auto-regressive language model built on an optimized transformer architecture.
It uses:
- Supervised Fine-Tuning (SFT)
- Reinforcement Learning from Human Feedback (RLHF)
Together, these techniques align the model's behavior with human preferences for helpfulness, accuracy, and safety.
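Because the model is instruction-tuned for dialogue, it is normally driven through a chat-style message list rather than raw text. The sketch below is an illustrative, hedged example using the Hugging Face `transformers` text-generation pipeline; the prompts and generation settings are assumptions, and actually invoking `generate` requires hardware able to hold the 70B weights.

```python
MODEL_ID = "meta-llama/Llama-3.3-70b-Instruct"


def build_messages(system_prompt: str, user_prompt: str) -> list:
    """Assemble the chat-format message list that the model's
    chat template expects (system turn first, then the user turn)."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


def generate(user_prompt: str, max_new_tokens: int = 128) -> str:
    """Load the model and run one chat turn.

    Heavy: downloads and loads ~70B parameters of weights, so the
    import and pipeline construction are kept inside this function
    and only happen when it is explicitly called.
    """
    from transformers import pipeline  # requires transformers + torch

    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        device_map="auto",   # shard across available GPUs
        torch_dtype="auto",  # use the checkpoint's native dtype
    )
    messages = build_messages("You are a helpful assistant.", user_prompt)
    out = generator(messages, max_new_tokens=max_new_tokens)
    # For chat input, generated_text is the message list with the
    # assistant's reply appended last.
    return out[0]["generated_text"][-1]["content"]
```

The split between `build_messages` and `generate` keeps the cheap prompt-formatting logic testable without touching the expensive model load.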
## Benchmark Scores
| Category | Benchmark | Shots | Metric | Llama 3.3 70B Instruct |
|---|---|---|---|---|
| General | MMLU (CoT) | 0 | Acc. (avg) | 86.0 |
| General | MMLU Pro (CoT) | 5 | Acc. (avg) | 68.9 |
| General | IFEval | – | – | 92.1 |
| Reasoning | GPQA Diamond (CoT) | 0 | Accuracy | 50.5 |
| Code | HumanEval | 0 | Pass@1 | 88.4 |
| Code | MBPP EvalPlus (base) | 0 | Pass@1 | 87.6 |
| Math | MATH (CoT) | 0 | Sympy Score | 77.0 |
| Tool Use | BFCL v2 | 0 | AST Macro Avg. | 77.3 |
| Multilingual | MGSM | 0 | EM (exact match) | 91.1 |
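The Pass@1 figures in the code rows report functional correctness: the fraction of problems solved by a sampled completion. A sketch of the standard unbiased pass@k estimator used with HumanEval-style evaluation (the function name and example numbers here are illustrative):

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimator of pass@k: the probability that at least one
    of k completions drawn from n samples (of which c are correct)
    passes the tests, computed as 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect samples: every draw of k must
        # contain at least one correct completion.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# With n samples per problem, pass@1 reduces to c / n, the raw
# fraction of correct completions.
print(pass_at_k(10, 4, 1))  # → 0.4
```

Averaging this quantity over all benchmark problems yields the table's Pass@1 score.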