---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- mistral
- mistral7b
- bnb
---

# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!

This is a 4-bit model quantized directly with `bitsandbytes`.
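
To load the 4-bit weights with `transformers` + `bitsandbytes`, a minimal sketch looks like the following. The repo id `unsloth/mistral-7b-bnb-4bit` and the prompt are illustrative assumptions; substitute this model's actual repo id.

```python
# Minimal loading sketch (the repo id below is an assumption, not stated in this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "unsloth/mistral-7b-bnb-4bit"  # assumed repo id; replace with this model's id

# 4-bit NF4 quantization via bitsandbytes. A pre-quantized repo may already embed an
# equivalent config; passing one explicitly simply overrides the stored settings.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate; places layers on the available GPU
)

# Quick generation sanity check.
inputs = tokenizer("Unsloth makes finetuning", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```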

We have a Google Colab Tesla T4 notebook for Mistral 7b here: https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing
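
The notebook walks through the full finetuning workflow. A condensed sketch of the Unsloth loading and LoRA setup is shown below; the repo id and hyperparameter values are illustrative assumptions, not an exact copy of the notebook.

```python
# Sketch: load this model with Unsloth and attach LoRA adapters for finetuning.
# The repo id "unsloth/mistral-7b-bnb-4bit" and LoRA values are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",  # assumed repo id
    max_seq_length=2048,
    dtype=None,          # auto-detect: float16 on a T4, bfloat16 on Ampere+
    load_in_4bit=True,   # use the 4-bit bitsandbytes weights
)

# Attach LoRA adapters so only a small fraction of parameters are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing=True,
)

# From here, training typically proceeds with a standard trainer
# (e.g. TRL's SFTTrainer) on your own dataset.
```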

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="400"/>](https://github.com/unslothai/unsloth)