The essay-generating branch of MHENN. A 4-bit quantized model (Q4_K_M) is available in the file "mhennlitQ4_K_M.gguf".
Finetuned for 650 steps on an NVIDIA V100 in a Google Colab instance, using the netcat420/quiklit dataset.
Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1