Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

# gemma-2-2b - GGUF
- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/gemma-2-2b/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2-2b.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q2_K.gguf) | Q2_K | 1.15GB |
| [gemma-2-2b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ3_XS.gguf) | IQ3_XS | 1.22GB |
| [gemma-2-2b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ3_S.gguf) | IQ3_S | 1.27GB |
| [gemma-2-2b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q3_K_S.gguf) | Q3_K_S | 1.27GB |
| [gemma-2-2b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ3_M.gguf) | IQ3_M | 1.3GB |
| [gemma-2-2b.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q3_K.gguf) | Q3_K | 1.36GB |
| [gemma-2-2b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q3_K_M.gguf) | Q3_K_M | 1.36GB |
| [gemma-2-2b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q3_K_L.gguf) | Q3_K_L | 1.44GB |
| [gemma-2-2b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ4_XS.gguf) | IQ4_XS | 1.47GB |
| [gemma-2-2b.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_0.gguf) | Q4_0 | 1.52GB |
| [gemma-2-2b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ4_NL.gguf) | IQ4_NL | 1.53GB |
| [gemma-2-2b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_K_S.gguf) | Q4_K_S | 1.53GB |
| [gemma-2-2b.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_K.gguf) | Q4_K | 1.59GB |
| [gemma-2-2b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [gemma-2-2b.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_1.gguf) | Q4_1 | 1.64GB |
| [gemma-2-2b.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_0.gguf) | Q5_0 | 1.75GB |
| [gemma-2-2b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_K_S.gguf) | Q5_K_S | 1.75GB |
| [gemma-2-2b.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_K.gguf) | Q5_K | 1.79GB |
| [gemma-2-2b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_K_M.gguf) | Q5_K_M | 1.79GB |
| [gemma-2-2b.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_1.gguf) | Q5_1 | 1.87GB |
| [gemma-2-2b.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q6_K.gguf) | Q6_K | 2.0GB |
| [gemma-2-2b.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q8_0.gguf) | Q8_0 | 2.59GB |
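
The table above lists the quantizations from smallest to largest; pick the largest file that fits your hardware. As a quick smoke test, any of these files can be downloaded and run locally — a minimal sketch, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed (the quant chosen here is illustrative):

```python
# pip install huggingface_hub llama-cpp-python   (assumed environment)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one file from the table above; Q4_K_M is a common size/quality tradeoff.
path = hf_hub_download(
    repo_id="RichardErkhov/unsloth_-_gemma-2-2b-gguf",
    filename="gemma-2-2b.Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=path, n_ctx=2048)
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```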
Original model description:
---
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma2
- gemma
---
## Reminder to use the dev version of Transformers:
`pip install git+https://github.com/huggingface/transformers.git`
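
Once the dev build is installed, the original checkpoint loads through the standard `transformers` API — a minimal sketch (the dtype and generation settings are illustrative, not taken from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-2b",
    torch_dtype=torch.bfloat16,  # illustrative choice; use what your hardware supports
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```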
# Finetune Gemma 2, Llama 3.1, Mistral 2-5x faster with 70% less memory via Unsloth!

Directly quantized 4-bit model with `bitsandbytes`.
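
For reference, loading the checkpoint in 4-bit with `bitsandbytes` looks roughly like this — a sketch assuming a CUDA GPU and the `bitsandbytes` package; the NF4 settings are common defaults, not values specified by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Common 4-bit NF4 configuration (assumed defaults, not prescribed by the card).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-2b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")
```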
We have a Google Colab Tesla T4 notebook for **Gemma 2 (2B)** here: https://colab.research.google.com/drive/1weTpKOjBZxZJ5PQ-Ql8i6ptAY2x-FWVA?usp=sharing

We have a Google Colab Tesla T4 notebook for **Gemma 2 (9B)** here: https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free

All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF, served with vLLM, or uploaded to Hugging Face. (A scripted sketch of the same flow follows the list below.)

| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama 3 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2x faster | 63% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Phi 3 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 63% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama (34B)** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral (7B)** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |

- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
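
For those who prefer a script over the notebooks, the same flow can be started with Unsloth's Python API — a minimal sketch, assuming the `unsloth` package is installed; the sequence length, LoRA rank, and target modules are common choices, not values mandated by the notebooks:

```python
from unsloth import FastLanguageModel

# Load the base model in 4-bit (values are illustrative).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-2b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are typical, not card-mandated.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
# From here, train with e.g. TRL's SFTTrainer, as the notebooks do.
```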