mgoin committed
Commit cba3216
1 Parent(s): b709758

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -6,7 +6,7 @@ tags:
 ---
 # mpt-7b-gsm8k-pruned70-quant
 
-**Paper**: [https://arxiv.org/pdf/xxxxxxx.pdf](https://arxiv.org/pdf/xxxxxxx.pdf)
+**Paper**: [Sparse Finetuning for Inference Acceleration of Large Language Models](https://arxiv.org/abs/2310.06927)
 **Code**: https://github.com/neuralmagic/deepsparse/tree/main/research/mpt
 
 This model was produced from an [MPT-7B base model](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pt) finetuned on the GSM8k dataset, with pruning applied using [SparseGPT](https://arxiv.org/abs/2301.00774) followed by retraining for 4 epochs with L2 distillation. It was then exported for optimized inference with [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).