---
tags:
- vllm
- sparsity
pipeline_tag: text-generation
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
---

# Sparse-Llama-3.1-8B-2of4

## Model Overview
- **Model Architecture:** Llama-3.1-8B
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Sparsity:** 2:4
- **Release Date:** 11/20/2024
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
- **Model Developers:** Neural Magic

This is the 2:4 sparse version of [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). On the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), it achieves an average score of 62.16, compared to 63.19 for the dense model, demonstrating a **98.37% accuracy recovery**. On the [Mosaic Eval Gauntlet](https://github.com/mosaicml/llm-foundry/blob/main/scripts/eval/local_data/EVAL_GAUNTLET.md) benchmark (version v0.3), it achieves an average score of 53.85, versus 55.34 for the dense model, representing a **97.3% accuracy recovery**.

### Model Optimizations

This model was obtained by pruning all linear operators within transformer blocks to the 2:4 sparsity pattern: in each group of four weights, two are retained and two are pruned. In addition to pruning, the sparse model was trained with knowledge distillation for 13B tokens to recover the accuracy loss incurred by pruning. For pruning, we utilize an optimized version of [SparseGPT](https://arxiv.org/abs/2301.00774) through [LLM-Compressor](https://github.com/vllm-project/llm-compressor), and for sparse training with knowledge distillation we utilize the [SquareHead approach](https://arxiv.org/abs/2310.06927).

## Deployment with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the sketch below. vLLM also supports OpenAI-compatible serving; see the [documentation](https://docs.vllm.ai/en/latest/) for more details.
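The following is a minimal offline-inference sketch. The repository id, prompt, and sampling parameters are illustrative assumptions rather than part of the original card.

```python
from vllm import LLM, SamplingParams

# Assumed Hugging Face repository id for this checkpoint.
model_id = "neuralmagic/Sparse-Llama-3.1-8B-2of4"

# Load the model with vLLM.
llm = LLM(model=model_id)

# Illustrative prompt and sampling settings; adjust as needed.
prompts = ["The benefits of 2:4 structured sparsity for LLM inference include"]
sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

# Generate and print the completions.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```

The same model id can also be passed to vLLM's OpenAI-compatible server when an API endpoint is preferred over offline batching.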
## Evaluation

This model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1) with the [vLLM](https://docs.vllm.ai/en/stable/) engine for faster inference, and on the [Mosaic Eval Gauntlet](https://github.com/mosaicml/llm-foundry/blob/main/scripts/eval/local_data/EVAL_GAUNTLET.md) benchmark (version v0.3). The evaluation results are summarized below.
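As a rough sketch of how such an evaluation can be run (not necessarily the exact configuration behind the numbers below), OpenLLM-style tasks can be executed through lm-evaluation-harness with its vLLM backend; the model id, task names, and few-shot settings here are assumptions.

```python
import lm_eval

# Hedged reproduction sketch: run two of the 5-shot OpenLLM v1 tasks with the
# vLLM backend of lm-evaluation-harness. Other tasks in the suite use different
# few-shot counts (e.g., ARC-C is 25-shot and HellaSwag is 10-shot).
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=neuralmagic/Sparse-Llama-3.1-8B-2of4,dtype=auto",
    tasks=["winogrande", "gsm8k"],
    num_fewshot=5,
)
print(results["results"])
```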
### Accuracy

#### Open LLM Leaderboard evaluation scores

| Benchmark | Llama-3.1-8B | Sparse-Llama-3.1-8B-2of4 |
|---|---|---|
| ARC-C (25-shot) | 58.2 | 59.4 |
| MMLU (5-shot) | 65.4 | 60.6 |
| HellaSwag (10-shot) | 82.3 | 79.8 |
| WinoGrande (5-shot) | 78.3 | 75.9 |
| GSM8K (5-shot) | 50.7 | 56.3 |
| TruthfulQA (0-shot) | 44.2 | 40.9 |
| Average Score | 63.19 | 62.16 |
| Accuracy Recovery (%) | 100 | 98.37 |
#### Mosaic Eval Gauntlet evaluation scores
| Benchmark | Llama-3.1-8B | Sparse-Llama-3.1-8B-2of4 |
|---|---|---|
| World Knowledge | 59.4 | 55.6 |
| Commonsense Reasoning | 49.3 | 50.0 |
| Language Understanding | 69.8 | 69.0 |
| Symbolic Problem Solving | 40.0 | 37.1 |
| Reading Comprehension | 58.2 | 57.5 |
| Average Score | 55.34 | 53.85 |
| Accuracy Recovery (%) | 100 | 97.3 |
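For reference, the accuracy-recovery figures in both tables correspond to the ratio of the sparse-model average to the dense-model average, expressed as a percentage; a quick check with the reported averages:

```python
# Accuracy recovery = 100 * (sparse average / dense average).
def accuracy_recovery(sparse_avg: float, dense_avg: float) -> float:
    return round(100 * sparse_avg / dense_avg, 2)

print(accuracy_recovery(62.16, 63.19))  # OpenLLM v1: 98.37
print(accuracy_recovery(53.85, 55.34))  # Mosaic Eval Gauntlet v0.3: 97.31 (reported as 97.3)
```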