mistralai/Mistral-Small-3.1-24B-Instruct-2503
Model Information
mistralai/Mistral-Small-3.1-24B-Instruct-2503 is an instruction-finetuned version of Mistral-Small-3.1-24B-Base-2503.
Building upon Mistral Small 3 (2501), this release introduces state-of-the-art vision understanding and expands long-context capabilities up to 128k tokens, all without compromising performance in standard language tasks.
With 24 billion parameters, this model delivers strong performance across text, code, math, and vision-based tasks.
- Model Developer: Mistral AI
- Model Release Date: March 17, 2025
- Supported Languages: English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, Farsi
Model Architecture
mistralai/Mistral-Small-3.1-24B-Instruct-2503 is optimized for both local deployment and enterprise use. It is highly knowledge-dense and can run efficiently on:
- A single RTX 4090
- A 32GB RAM MacBook (when quantized)
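The hardware claims above follow from simple weight-memory arithmetic. A minimal sketch (weights only; activation and KV-cache overhead are ignored, so real requirements are somewhat higher):

```python
# Back-of-envelope weight memory for a 24B-parameter model at
# several precisions (weights only, 1 GB = 1e9 bytes).
PARAMS = 24e9

def weight_gb(bits_per_param: float) -> float:
    """Approximate weight memory in GB for the given precision."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("bf16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: ~{weight_gb(bits):.0f} GB")
# bf16 weights alone need ~48 GB, while 4-bit quantization brings
# them to ~12 GB, consistent with the 32GB MacBook figure above.
```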
Ideal Use Cases:
- ⚡ Fast-response conversational agents
- 🔁 Low-latency function calling
- 🧠 Subject matter experts (via fine-tuning)
- 🔐 Local inference for privacy-sensitive orgs
- 🧮 Programming and mathematical reasoning
- 📚 Long document understanding (up to 128k tokens)
- 👁️ Visual understanding and perception tasks
Mistral AI also plans to release commercial variants with support for custom context lengths, modalities, and domains.
Benchmark Scores
| Model | MMLU | MMLU Pro | MATH | GPQA Main | GPQA Diamond | MBPP | HumanEval | SimpleQA |
|---|---|---|---|---|---|---|---|---|
| Small 3.1 24B Instruct | 80.62% | 66.76% | 69.30% | 44.42% | 45.96% | 74.71% | 88.41% | 10.43% |