mgoin committed
Commit 491ee86
1 Parent(s): a0375a7

Update README.md

Files changed (1)
  1. README.md +29 -4
README.md CHANGED
@@ -1,14 +1,39 @@
  ---
+ datasets:
+ - gsm8k
  tags:
  - deepsparse
  ---
+ # mpt-7b-gsm8k-pruned50-quant

- # Sparse MPT-7B-GSM8k - DeepSparse
+ **Paper**: [https://arxiv.org/pdf/xxxxxxx.pdf](https://arxiv.org/pdf/xxxxxxx.pdf)
+ **Code**: https://github.com/neuralmagic/deepsparse/tree/main/research/mpt

- Sparse finetuned MPT 7b model on GSM8k, pruned to 50% and quantized for inference with DeepSparse
+ This model was produced from an [MPT-7B base model](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pt) finetuned on the GSM8k dataset, pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), and retrained for 2 epochs with L2 distillation. It was then exported for optimized inference with [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).
+
+ GSM8k zero-shot accuracy with [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness): 30.71% (FP32 baseline is 28.2%)
+
+ ### Usage

  ```python
  from deepsparse import TextGeneration
- model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned50-quant")
+ model_path = "hf:neuralmagic/mpt-7b-gsm8k-pruned50-quant" # or use a SparseZoo stub (zoo:mpt-7b-gsm8k_mpt_pretrain-pruned50_quantized)
+ model = TextGeneration(model=model_path)
  model("There are twice as many boys as girls at Dr. Wertz's school. If there are 60 girls and 5 students to every teacher, how many teachers are there?", max_new_tokens=50)
- ```
+ ```
+
+ All MPT model weights are available on [SparseZoo](https://sparsezoo.neuralmagic.com/?datasets=gsm8k&ungrouped=true), and the CPU speedup for generative inference can be reproduced by following the instructions at [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).
+
+
+ | Model Links                                                                                               | Compression                       |
+ | --------------------------------------------------------------------------------------------------------- | --------------------------------- |
+ | [neuralmagic/mpt-7b-gsm8k-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-quant)                     | Quantization (W8A8)               |
+ | [neuralmagic/mpt-7b-gsm8k-pruned40-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned40-quant)   | Quantization (W8A8) & 40% Pruning |
+ | [neuralmagic/mpt-7b-gsm8k-pruned50-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned50-quant)   | Quantization (W8A8) & 50% Pruning |
+ | [neuralmagic/mpt-7b-gsm8k-pruned60-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned60-quant)   | Quantization (W8A8) & 60% Pruning |
+ | [neuralmagic/mpt-7b-gsm8k-pruned70-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned70-quant)   | Quantization (W8A8) & 70% Pruning |
+ | [neuralmagic/mpt-7b-gsm8k-pruned75-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned75-quant)   | Quantization (W8A8) & 75% Pruning |
+ | [neuralmagic/mpt-7b-gsm8k-pruned80-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned80-quant)   | Quantization (W8A8) & 80% Pruning |
+
+
+ For general questions on these models and sparsification methods, reach out to the engineering team on our [community Slack](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ).
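
For readers following the updated usage snippet: the `TextGeneration` pipeline returns an output object rather than a bare string. Below is a minimal sketch of printing the generated answer, assuming a recent DeepSparse release where the output exposes a `generations` list with a `.text` field; check your installed version if the attribute names differ.

```python
from deepsparse import TextGeneration

# Load the 50% pruned + quantized GSM8k model; a SparseZoo stub such as
# zoo:mpt-7b-gsm8k_mpt_pretrain-pruned50_quantized can be used instead of the HF stub.
model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned50-quant")

prompt = (
    "There are twice as many boys as girls at Dr. Wertz's school. "
    "If there are 60 girls and 5 students to every teacher, how many teachers are there?"
)

# The call returns a pipeline output object, not a raw string.
output = model(prompt, max_new_tokens=50)

# Assumption: the output carries a `generations` list whose entries hold the generated text.
print(output.generations[0].text)
```

The same call pattern applies to any of the checkpoints in the table above; only the model stub changes.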