
Follow Pruna AI on Twitter, GitHub, LinkedIn, and Discord.

Simply make AI models cheaper, smaller, faster, and greener!

  • Give a thumbs up if you like this model!
  • Contact us and tell us which model to compress next here.
  • Request access to easily compress your own AI models here.
  • Read the documentation to learn more here.
  • Join Pruna AI community on Discord here to share feedback/suggestions or get help.

Frequently Asked Questions

  • How does the compression work? The model is compressed by using bitsandbytes.
  • How does the model quality change? The quality of the model output may slightly degrade compared to the base model.
  • What is the model format? We use the standard safetensors format.
  • How to compress my own models? You can request premium access to more compression methods and tech support for your specific use-cases here.
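To illustrate the idea behind the bitsandbytes compression mentioned above, here is a minimal sketch of blockwise absmax 4-bit quantization. This is not the actual bitsandbytes implementation (which uses the NF4 data type and CUDA kernels); the function names and the 4-element example block are purely illustrative.

```python
# Illustrative sketch of absmax 4-bit quantization -- NOT the bitsandbytes code.
# Each float32 weight is mapped to a signed 4-bit integer plus a shared scale.

def quantize_4bit(block):
    """Map a block of floats to integers in [-7, 7] plus one float scale."""
    absmax = max(abs(x) for x in block) or 1.0
    scale = absmax / 7.0  # largest magnitude maps to +/-7
    q = [round(x / scale) for x in block]
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from the 4-bit codes."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.91, -0.07]        # hypothetical weight block
q, scale = quantize_4bit(weights)
approx = dequantize_4bit(q, scale)
# Each weight now occupies 4 bits instead of 32, at the cost of a small
# rounding error bounded by scale / 2 per weight.
```

This is why the FAQ notes that output quality may slightly degrade: the dequantized weights only approximate the originals, with the error controlled by the per-block scale.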

Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-2.8-mistral-7b-v02, which provided the base model, before using this model. The license of the pruna-engine is here on PyPI.

Want to compress other models?

  • Contact us and tell us which model to compress next here.
  • Request access to easily compress your own AI models here.