---
language:
  - en
library_name: transformers
license: apache-2.0
tags:
  - unsloth
  - transformers
  - mistral
  - mistral-7b
  - bnb
---

Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!

Directly quantized 4-bit model with bitsandbytes. The original weights at https://huggingface.co/alpindale/Mistral-7B-v0.2-hf/tree/main were used to create this 4-bit quantized version.
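
A minimal loading sketch is below. The repo id, prompt, and generation settings are illustrative assumptions, not values taken from this card; since the checkpoint is pre-quantized with bitsandbytes, it only needs `transformers`, `accelerate`, and `bitsandbytes` installed.

```python
# Minimal sketch: load a pre-quantized 4-bit bitsandbytes checkpoint.
# The repo id below is an assumption for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/mistral-7b-v0.2-bnb-4bit"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the 4-bit weights on the available GPU
)

# Quick smoke test: generate a short completion.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```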

We have a Google Colab Tesla T4 notebook for Mistral 7b v2 (32K context length) here: https://colab.research.google.com/drive/1Fa8QVleamfNELceNM9n7SeAGr_hT5XIn?usp=sharing

# ✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF or vLLM, or uploaded to Hugging Face.
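
A rough sketch of that notebook workflow with Unsloth and TRL follows. The repo id, dataset, prompt template, and hyperparameters are illustrative assumptions, not values from this card.

```python
# Illustrative finetuning sketch -- repo id, dataset, and hyperparameters are assumptions.
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 2048

# Load the pre-quantized 4-bit checkpoint (assumed repo id).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.2-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing=True,
)

# Example dataset (assumed): format each record into a single "text" field.
prompt_template = """### Instruction:
{}

### Input:
{}

### Response:
{}"""

def format_examples(examples):
    texts = [prompt_template.format(ins, inp, out) + tokenizer.eos_token
             for ins, inp, out in zip(examples["instruction"],
                                      examples["input"],
                                      examples["output"])]
    return {"text": texts}

dataset = load_dataset("yahma/alpaca-cleaned", split="train")
dataset = dataset.map(format_examples, batched=True)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```

After training, Unsloth's export helpers (e.g. `save_pretrained_gguf` for GGUF, `save_pretrained_merged` / `push_to_hub_merged` for merged 16-bit weights) cover the GGUF, vLLM, and Hugging Face paths mentioned above.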

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
| Llama-2 7b | ▶️ Start on Colab | 2.2x faster | 43% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama 34b A100 | ▶️ Start on Colab | 1.9x faster | 27% less |
| Mistral 7b 1xT4 | ▶️ Start on Kaggle | 5x faster* | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |