---
datasets:
- gsm8k
tags:
- deepsparse
---
# mpt-7b-gsm8k-pruned40-quant

This model was produced from an MPT-7B base model finetuned on the GSM8k dataset, with pruning and quantization applied using [SparseGPT](https://arxiv.org/abs/2301.00774). It was then exported for optimized inference with [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).

GSM8k zero-shot accuracy with [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness): 30.33%
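
As a reference point, a zero-shot GSM8k run with the harness typically looks like the sketch below. This assumes the neuralmagic fork keeps the upstream `main.py` CLI and its `hf-causal` model adapter; the fork may instead provide a DeepSparse-specific adapter, so check its README for the exact flags.

```bash
# Illustrative invocation only — the flags and adapter name are assumptions
# based on the upstream lm-evaluation-harness CLI, not confirmed for this fork.
python main.py \
  --model hf-causal \
  --model_args pretrained=neuralmagic/mpt-7b-gsm8k-pruned40-quant \
  --tasks gsm8k \
  --num_fewshot 0
```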

### Usage

```python
from deepsparse import TextGeneration

# Load the sparsified model directly from the Hugging Face Hub
model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant")

# Generate an answer to a GSM8k-style word problem
model("There are twice as many boys as girls at Dr. Wertz's school. If there are 60 girls and 5 students to every teacher, how many teachers are there?", max_new_tokens=50)
```
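
The pipeline call returns a structured result rather than a plain string; with the `TextGeneration` output schema, the generated text can be read as in this small sketch (assuming the default `generations` field):

```python
# The `generations` list and its `.text` field follow the TextGeneration
# output schema; verify against your installed deepsparse version.
output = model("How many teachers are there?", max_new_tokens=50)
print(output.generations[0].text)
```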

All MPT model weights are available on [SparseZoo](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true), and the CPU speedup for generative inference can be reproduced by following the instructions in the [DeepSparse repository](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).
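
For a quick latency check, DeepSparse ships a `deepsparse.benchmark` CLI; the invocation below is a sketch, and the model path shown is illustrative (the linked MPT research README has the authoritative benchmarking steps):

```bash
# Sketch: deepsparse.benchmark accepts SparseZoo stubs and local ONNX paths;
# whether it resolves hf: paths like the pipeline does is an assumption here.
deepsparse.benchmark "hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant"
```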

| Model Links | Compression |
| --------------------------------------------------------------------------------------------------------- | --------------------------------- |
| [neuralmagic/mpt-7b-gsm8k-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-quant) | Quantization (W8A8) |
| [neuralmagic/mpt-7b-gsm8k-pruned40-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned40-quant) | Quantization (W8A8) & 40% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned50-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned50-quant) | Quantization (W8A8) & 50% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned60-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned60-quant) | Quantization (W8A8) & 60% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned70-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned70-quant) | Quantization (W8A8) & 70% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned80-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned80-quant) | Quantization (W8A8) & 80% Pruning |

For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).