---
tags:
- deepsparse
---

# Sparse MPT-7B-GSM8k - DeepSparse

MPT-7B fine-tuned on GSM8K, pruned to 50% sparsity and quantized for efficient inference with DeepSparse.

```python
from deepsparse import TextGeneration

# Load the sparse-quantized MPT-7B-GSM8K model directly from the Hugging Face Hub
model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned50-quant")

# Run inference on a GSM8K-style word problem
model("There are twice as many boys as girls at Dr. Wertz's school. If there are 60 girls and 5 students to every teacher, how many teachers are there?", max_new_tokens=50)
```
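The call returns a structured result rather than a plain string. A minimal sketch of reading the generated text, continuing from the `model` pipeline above and assuming the output schema exposes a `generations` list with a `text` field (check the attribute names for your installed DeepSparse version):

```python
# Assumption: result.generations[0].text holds the generated completion
result = model(
    "There are twice as many boys as girls at Dr. Wertz's school. "
    "If there are 60 girls and 5 students to every teacher, how many teachers are there?",
    max_new_tokens=50,
)
print(result.generations[0].text)
```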