Tags: Text Generation · Transformers · ONNX · llama · sparse · instruct · deepsparse
mgoin committed
Commit 297c316
1 Parent(s): 0cb0208

Update README.md

Files changed (1): README.md (+2 −0)
README.md CHANGED
@@ -18,6 +18,8 @@ tags:
 This repo contains a [50% sparse Llama 2 7B](https://huggingface.co/neuralmagic/Llama-2-7b-pruned50-retrained) finetuned for instruction-following tasks using a blend of the Platypus + Open Orca + Dolphin datasets.
 It was then quantized to 8-bit weights + activations and exported to deploy with [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
 
+Official model weights from [Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment](https://arxiv.org/abs/2405.03594).
+
 **Authors**: Neural Magic, Cerebras
 
 ## Usage