bnb-4bit
Collection
A collection of 4-bit quantized models designed for fine-tuning. Demo: https://colab.research.google.com/drive/19UpFUjtbJoLua-4DMb1JKKwJAcvMyMHb
21 items
This model is a 4-bit quantized version of Mistral-7B-Instruct-v0.2, quantized with bitsandbytes. It is designed for fine-tuning. The PAD token is set to the UNK token.
Base model
mistralai/Mistral-7B-Instruct-v0.2
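A minimal sketch of how a bitsandbytes 4-bit model like this one is typically loaded for fine-tuning with the `transformers` library. The NF4 quantization settings and compute dtype below are common defaults, not details confirmed by this card; the base-model ID is used here since the card does not give the quantized repo's name.

```python
# Sketch: load a Mistral model in 4-bit via bitsandbytes for fine-tuning.
# Assumes transformers, bitsandbytes, and a CUDA-capable GPU are available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # base model from the card

# Typical 4-bit settings (assumed, not taken from the card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
# As the card notes, the PAD token is set to UNK.
tokenizer.pad_token = tokenizer.unk_token

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Loading in 4-bit keeps the 7B weights small enough for a single consumer GPU; fine-tuning then usually proceeds with a parameter-efficient method such as LoRA on top of the frozen quantized base.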