A set of 50% weight-sparse Llama-3.1-8B models pruned with [Wanda](https://github.com/locuslab/wanda).
Model links are in the table below; the checkpoints can be loaded as-is with Hugging Face Transformers.
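
Below is a minimal loading sketch using the standard Transformers API. It assumes a GPU-backed environment with `accelerate` installed for `device_map="auto"`; the prompt is only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example: load the 50% unstructured-sparse checkpoint from the table below.
model_id = "vuiseng9/Meta-Llama-3.1-8B-wanda-unstructured-0.5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # requires `accelerate`
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```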
### Perplexity
![Perplexity over Sparsity](llama3.1-8B_Wanda_sparsity.png)
### MMLU (5-shot)
| Sparsity Pattern | Accuracy (%) | Δ vs Dense (points) | Model Link |
|----------------|--------------|-----------------------|-----------|
| Dense | 65.1 | baseline | [Meta-Llama-3.1-8B-wanda-unstructured-0.0](https://huggingface.co/vuiseng9/Meta-Llama-3.1-8B-wanda-unstructured-0.0) |
| Unstructured | 50.0 | -15.1 | [Meta-Llama-3.1-8B-wanda-unstructured-0.5](https://huggingface.co/vuiseng9/Meta-Llama-3.1-8B-wanda-unstructured-0.5) |
| 4:8 | 39.3 | -25.8 | [Meta-Llama-3.1-8B-wanda-4of8](https://huggingface.co/vuiseng9/Meta-Llama-3.1-8B-wanda-4of8) |
| 2:4 | 28.7 | -36.4 | [Meta-Llama-3.1-8B-wanda-2of4](https://huggingface.co/vuiseng9/Meta-Llama-3.1-8B-wanda-2of4) |
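
As a quick sanity check on a downloaded checkpoint, the sketch below measures the fraction of exactly-zero weights in the decoder's Linear layers, which should be close to 0.5 for the models above. Excluding `lm_head` reflects the usual Wanda setup of pruning only the transformer block projections; treat that filter as an assumption.

```python
import torch
from transformers import AutoModelForCausalLM

model_id = "vuiseng9/Meta-Llama-3.1-8B-wanda-unstructured-0.5"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

total, zeros = 0, 0
for name, module in model.named_modules():
    # Assumption: only decoder Linear layers were pruned; skip the output head.
    if isinstance(module, torch.nn.Linear) and "lm_head" not in name:
        w = module.weight.data
        total += w.numel()
        zeros += (w == 0).sum().item()

print(f"Overall weight sparsity: {zeros / total:.2%}")
```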