---
license: apache-2.0
---
A directly quantized 4-bit model, produced with `bitsandbytes`.

Unsloth can fine-tune LLMs with QLoRA 2.2x faster while using 62% less memory!

We have a Google Colab Tesla T4 notebook for Mistral 7B here: https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing