# mpt-7b-gsm8k-pruned40-quant

**Paper**: [Sparse Finetuning for Inference Acceleration of Large Language Models](https://arxiv.org/abs/2310.06927)

**Code**: https://github.com/neuralmagic/deepsparse/tree/main/research/mpt

This model was produced from an [MPT-7B base model](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pt) finetuned on the GSM8K dataset, pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), and retrained for 2 epochs with L2 distillation. It was then exported for optimized inference with [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).
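Since the export targets DeepSparse, inference should look roughly like the sketch below. This is a minimal, illustrative example, assuming a DeepSparse release with LLM support (the `TextGeneration` pipeline) and `hf:` model stubs; the stub, prompt, and generation parameters are assumptions, not taken from this card.

```python
# Minimal sketch: CPU inference for this pruned + quantized MPT model with DeepSparse.
# Assumes a DeepSparse release with LLM support, e.g. `pip install "deepsparse[llm]"`.
# The "hf:" stub below is an assumed way to reference this repo; a local path to the
# exported deployment directory may be needed instead.
from deepsparse import TextGeneration

pipeline = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant")

prompt = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether "
    "in April and May?"
)

# Generate a GSM8K-style step-by-step answer.
output = pipeline(prompt=prompt, max_new_tokens=128)
print(output.generations[0].text)
```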