---
datasets:
  - gsm8k
tags:
  - deepsparse
---

# mpt-7b-gsm8k-pruned40-quant

This model was produced from an MPT-7B base model finetuned on the GSM8k dataset, with pruning and quantization applied using SparseGPT. It was then exported for optimized inference with DeepSparse.

GSM8k zero-shot accuracy with lm-evaluation-harness: 30.33%
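The score above could in principle be reproduced with lm-evaluation-harness. The sketch below is assumption-laden: it presumes a harness version that registers a `deepsparse` model backend, and the backend name and `model_args` format may differ in your installation.

```python
# Hedged sketch only: assumes an lm-evaluation-harness build that registers
# a "deepsparse" model backend; the backend name and model_args format are
# assumptions, not confirmed by this model card.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="deepsparse",
    model_args="pretrained=hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant",
    tasks=["gsm8k"],
    num_fewshot=0,  # zero-shot, matching the reported 30.33%
)
print(results["results"]["gsm8k"])
```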

## Usage

```python
from deepsparse import TextGeneration

# Downloads the sparse model from the Hugging Face Hub and compiles it for CPU inference
model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant")

output = model("There are twice as many boys as girls at Dr. Wertz's school. If there are 60 girls and 5 students to every teacher, how many teachers are there?", max_new_tokens=50)
print(output.generations[0].text)
```

All MPT model weights are available on [SparseZoo](https://sparsezoo.neuralmagic.com/), and the CPU speedup for generative inference can be reproduced by following the instructions in the [DeepSparse](https://github.com/neuralmagic/deepsparse) repository.
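As a rough way to sanity-check CPU generation speed with the pipeline above (a minimal timing sketch, not the official benchmarking flow; the prompt and token budget are arbitrary):

```python
import time

from deepsparse import TextGeneration

model = TextGeneration(model="hf:neuralmagic/mpt-7b-gsm8k-pruned40-quant")
prompt = "If a classroom has 5 students per teacher and 120 students, how many teachers are there?"

# Warm-up call so one-time setup cost does not skew the measurement
model(prompt, max_new_tokens=32)

start = time.perf_counter()
model(prompt, max_new_tokens=32)
elapsed = time.perf_counter() - start
print(f"Second generation call took {elapsed:.2f}s for up to 32 new tokens")
```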

| Model Links | Compression |
| --- | --- |
| [neuralmagic/mpt-7b-gsm8k-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-quant) | Quantization (W8A8) |
| [neuralmagic/mpt-7b-gsm8k-pruned40-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned40-quant) | Quantization (W8A8) & 40% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned50-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned50-quant) | Quantization (W8A8) & 50% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned60-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned60-quant) | Quantization (W8A8) & 60% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned70-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned70-quant) | Quantization (W8A8) & 70% Pruning |
| [neuralmagic/mpt-7b-gsm8k-pruned80-quant](https://huggingface.co/neuralmagic/mpt-7b-gsm8k-pruned80-quant) | Quantization (W8A8) & 80% Pruning |

For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.