shimmyshimmer committed
Commit: 9ff3dd8
1 Parent(s): 3e7831a
Update README.md
README.md CHANGED
@@ -14,6 +14,9 @@ tags:
 - vision
 ---
 
+### ***Unsloth's [Dynamic 4-bit Quants](https://unsloth.ai/blog/dynamic-4bit) selectively avoid quantizing certain parameters, greatly improving accuracy while keeping VRAM usage similar to BnB 4-bit.<br>See our full collection of Unsloth quants on [Hugging Face here.](https://huggingface.co/collections/unsloth/unsloth-4-bit-dynamic-quants-67503bb873f89e15276c44e7)***
+<br>
+
 # Finetune Llama 3.2, Qwen 2.5, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
 
 We have a free Google Colab Tesla T4 notebook for Qwen2-VL (7B) here: https://colab.research.google.com/drive/1whHb54GNZMrNxIsi2wm2EY_-Pvo2QyKh?usp=sharing
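For context, the Dynamic 4-bit quants and the Qwen2-VL Colab notebook advertised in this README revision are normally used through Unsloth's Python API. The snippet below is a minimal sketch of that workflow and is not part of the commit: the repo id `unsloth/Qwen2-VL-7B-Instruct`, the 4-bit flag, and the LoRA arguments are assumptions based on Unsloth's documented usage, so defer to the linked Colab notebook for the exact calls.

```python
# Minimal sketch (assumed repo id and kwargs; see the linked Colab notebook for the
# authoritative, up-to-date API).
from unsloth import FastVisionModel  # Unsloth's loader for vision LLMs such as Qwen2-VL

# Load a pre-quantized 4-bit checkpoint; a dynamic 4-bit repo is loaded the same way
# as a standard BnB 4-bit one, so VRAM usage stays in the same range.
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Qwen2-VL-7B-Instruct",  # assumed repo name, for illustration only
    load_in_4bit=True,               # keeps the 7B model within a free Tesla T4's memory
)

# Attach LoRA adapters so only a small set of parameters is trained.
model = FastVisionModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    finetune_vision_layers=True,     # assumed flag names, following Unsloth's vision notebooks
    finetune_language_layers=True,
)
```

Loading a pre-quantized 4-bit checkpoint this way is what keeps VRAM close to ordinary BnB 4-bit, while the LoRA adapters carry the trainable parameters during finetuning.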