---
language:
  - en
library_name: transformers
license: mit
tags:
  - unsloth
  - phi3
  - transformers
  - phi
---

Finetune Mistral, Gemma, and Llama 2-5x faster with 70% less memory via Unsloth!

This model was quantized directly to 4-bit with bitsandbytes.
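
If you only want to run the checkpoint rather than finetune it, it loads like any bitsandbytes 4-bit model through transformers. Here is a minimal loading sketch, assuming a standard NF4 upload; the repo id below is a placeholder for this model's actual Hugging Face path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "unsloth/model-name-bnb-4bit"  # placeholder: substitute this repo's id

# NF4 quantization config matching typical bitsandbytes 4-bit uploads.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers across available GPUs/CPU
)
```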

We have a Google Colab Tesla T4 notebook for Phi-3 Medium here: https://colab.research.google.com/drive/1hhdhBa1j_hsymiW9m-WzxQtgqTH_NHqi?usp=sharing

We have a Google Colab Tesla T4 notebook for Phi-3 Mini here: https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing

# ✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face.
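
Under the hood, the notebooks drive Unsloth's `FastLanguageModel` API. Below is a minimal sketch of that flow, assuming the `unsloth` package is installed; the base model name and LoRA hyperparameters are illustrative defaults, not the notebooks' exact settings:

```python
from unsloth import FastLanguageModel

# Load any supported 4-bit base checkpoint (illustrative choice).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)

# Train with your dataset (e.g. trl's SFTTrainer), then export, e.g.:
# model.save_pretrained_gguf("out", tokenizer)   # GGUF for llama.cpp
# model.push_to_hub_merged("user/repo", tokenizer)  # merged weights on HF
```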

| Unsloth supports | Free Notebooks | Performance | Memory use |
|------------------|----------------|-------------|------------|
| Llama-3 8b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Gemma 7b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
| Llama-2 7b | ▶️ Start on Colab | 2.2x faster | 43% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| CodeLlama 34b A100 | ▶️ Start on Colab | 1.9x faster | 27% less |
| Mistral 7b 1xT4 | ▶️ Start on Kaggle | 5x faster* | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |