mgoin committed
Commit e2acb19
1 Parent(s): 3d4f8aa

Update README.md

Files changed (1): README.md +3 -2
README.md CHANGED
@@ -8,13 +8,14 @@ tags:
 
 This model was produced from an MPT-7B base model finetuned on the GSM8k dataset, with pruning and quantization applied using [SparseGPT](https://arxiv.org/abs/2301.00774). It was then exported for optimized inference with [DeepSparse](https://github.com/neuralmagic/deepsparse/tree/main/research/mpt).
 
-GSM8k zero-shot accuracy with [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness): 30.33%
+GSM8k zero-shot accuracy with [lm-evaluation-harness](https://github.com/neuralmagic/lm-evaluation-harness): 30.33% (dense fp32 is 28.2%)
 
 ### Usage
 
 ```python
 from deepsparse import TextGeneration
-model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant")
+model_path = "hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant"  # or use a sparsezoo stub (zoo:mpt-7b-gsm8k_mpt_pretrain-pruned40_quantized)
+model = TextGeneration(model=model_path)
 model("There are twice as many boys as girls at Dr. Wertz's school. If there are 60 girls and 5 students to every teacher, how many teachers are there?", max_new_tokens=50)
 ```
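
As a sanity check, the updated snippet can be run end to end. The sketch below is illustrative, not part of the commit: the handling of `result` assumes DeepSparse's `TextGeneration` output exposes the generated text as `generations[0].text`, and the expected-answer comment is simply the worked arithmetic for the prompt.

```python
from deepsparse import TextGeneration

# Hub path from the commit; the sparsezoo stub mentioned in the new comment
# (zoo:mpt-7b-gsm8k_mpt_pretrain-pruned40_quantized) is an alternative source.
model_path = "hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant"
model = TextGeneration(model=model_path)

# Worked arithmetic for the prompt: 60 girls, twice as many boys = 120 boys,
# 180 students total, 5 students per teacher -> 180 / 5 = 36 teachers.
result = model(
    "There are twice as many boys as girls at Dr. Wertz's school. "
    "If there are 60 girls and 5 students to every teacher, how many teachers are there?",
    max_new_tokens=50,
)

# Assumption: the pipeline output carries generated text under generations[0].text.
print(result.generations[0].text)
```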