Update README.md
This repo contains a [70% sparse Llama 2 7B](https://huggingface.co/neuralmagic/Llama-2-7b-pruned70-retrained) finetuned for instruction-following tasks using a blend of the Platypus + Open Orca + Dolphin datasets.
Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594).
**Authors**: Neural Magic, Cerebras
## Usage
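As a starting point, here is a minimal sketch of loading and prompting the model with Hugging Face `transformers`. The model id below is the base sparse checkpoint linked above, and the instruction template is an assumption — substitute this repo's own id and the prompt format it was finetuned with.

```python
# Minimal sketch (assumptions noted in comments): load a sparse Llama 2 7B
# checkpoint with Hugging Face transformers and generate a response.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this is the base sparse checkpoint linked above; replace with
# this repo's own model id if it differs.
MODEL_ID = "neuralmagic/Llama-2-7b-pruned70-retrained"

def build_prompt(instruction: str) -> str:
    # Assumption: a generic instruction-style template; check the model card
    # for the exact prompt format this finetune expects.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def generate(instruction: str, max_new_tokens: int = 128) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" places weights across available devices (requires
    # the accelerate package).
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

For example, `print(generate("List three benefits of weight sparsity."))` prints the decoded completion.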