|
Quantization made by Richard Erkhov. |
|
|
|
[Github](https://github.com/RichardErkhov) |
|
|
|
[Discord](https://discord.gg/pvy7H8DZMG) |
|
|
|
[Request more models](https://github.com/RichardErkhov/quant_request) |
|
|
|
|
|
# gemma-2-2b - GGUF

- Model creator: https://huggingface.co/unsloth/
- Original model: https://huggingface.co/unsloth/gemma-2-2b/
|
|
|
|
|
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2-2b.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q2_K.gguf) | Q2_K | 1.15GB |
| [gemma-2-2b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ3_XS.gguf) | IQ3_XS | 1.22GB |
| [gemma-2-2b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ3_S.gguf) | IQ3_S | 1.27GB |
| [gemma-2-2b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q3_K_S.gguf) | Q3_K_S | 1.27GB |
| [gemma-2-2b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ3_M.gguf) | IQ3_M | 1.3GB |
| [gemma-2-2b.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q3_K.gguf) | Q3_K | 1.36GB |
| [gemma-2-2b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q3_K_M.gguf) | Q3_K_M | 1.36GB |
| [gemma-2-2b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q3_K_L.gguf) | Q3_K_L | 1.44GB |
| [gemma-2-2b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ4_XS.gguf) | IQ4_XS | 1.47GB |
| [gemma-2-2b.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_0.gguf) | Q4_0 | 1.52GB |
| [gemma-2-2b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.IQ4_NL.gguf) | IQ4_NL | 1.53GB |
| [gemma-2-2b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_K_S.gguf) | Q4_K_S | 1.53GB |
| [gemma-2-2b.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_K.gguf) | Q4_K | 1.59GB |
| [gemma-2-2b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [gemma-2-2b.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q4_1.gguf) | Q4_1 | 1.64GB |
| [gemma-2-2b.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_0.gguf) | Q5_0 | 1.75GB |
| [gemma-2-2b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_K_S.gguf) | Q5_K_S | 1.75GB |
| [gemma-2-2b.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_K.gguf) | Q5_K | 1.79GB |
| [gemma-2-2b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_K_M.gguf) | Q5_K_M | 1.79GB |
| [gemma-2-2b.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q5_1.gguf) | Q5_1 | 1.87GB |
| [gemma-2-2b.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q6_K.gguf) | Q6_K | 2.0GB |
| [gemma-2-2b.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_gemma-2-2b-gguf/blob/main/gemma-2-2b.Q8_0.gguf) | Q8_0 | 2.59GB |
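
As a quick way to try one of these files, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (one GGUF runtime among several; any llama.cpp-based tool works the same way). The choice of the Q4_K_M file is only an example:

```python
# Minimal sketch: download a single quant from this repo and run it locally.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one GGUF file instead of cloning the whole repository.
model_path = hf_hub_download(
    repo_id="RichardErkhov/unsloth_-_gemma-2-2b-gguf",
    filename="gemma-2-2b.Q4_K_M.gguf",  # any file from the table above works
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Write one sentence about quantization:", max_tokens=48)
print(out["choices"][0]["text"])
```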
|
|
|
|
|
|
|
|
|
Original model description: |
|
---
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma2
- gemma
---
|
|
|
## Reminder to use the dev version of Transformers:

`pip install git+https://github.com/huggingface/transformers.git`
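
Once the dev build is installed, loading and sampling the original model follows the standard Transformers pattern. A minimal sketch (the prompt and generation settings are arbitrary, and a GPU with bfloat16 support is assumed):

```python
# Minimal generation sketch using the dev build of Transformers installed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-2b",
    torch_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
    device_map="auto",
)

inputs = tokenizer("Gemma 2 is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```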
|
|
|
# Finetune Gemma 2, Llama 3.1, Mistral 2-5x faster with 70% less memory via Unsloth! |
|
|
|
Directly quantized 4-bit model with `bitsandbytes`.
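
A sketch of what such a direct 4-bit load looks like with `bitsandbytes` (the NF4 and bfloat16-compute settings below are common defaults, not values taken from this card; a CUDA GPU and `pip install bitsandbytes accelerate` are assumed):

```python
# Load the base model in 4-bit via a BitsAndBytesConfig.
# NF4 + bfloat16 compute are assumptions, not settings specified by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2-2b",
    quantization_config=bnb_config,
    device_map="auto",
)
```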
|
|
|
We have a Google Colab Tesla T4 notebook for **Gemma 2 (2B)** here: https://colab.research.google.com/drive/1weTpKOjBZxZJ5PQ-Ql8i6ptAY2x-FWVA?usp=sharing |
|
|
|
We have a Google Colab Tesla T4 notebook for **Gemma 2 (9B)** here: https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing |
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
|
|
## ✨ Finetune for Free |
|
|
|
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF, served with vLLM, or uploaded to Hugging Face. A minimal sketch of what these notebooks run under the hood follows the table and notes below.
|
|
|
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama 3 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2x faster | 63% less |
| **Mistral (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Phi 3 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 63% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama (34B)** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral (7B)** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
|
|
|
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
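
As promised above, here is a minimal sketch of the QLoRA-style run these notebooks perform, using Unsloth's `FastLanguageModel` together with TRL's `SFTTrainer`. The dataset path, its `"text"` column, and all hyperparameters are placeholders, and `pip install unsloth trl datasets` is assumed:

```python
# Minimal Unsloth finetuning sketch; dataset and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit, then attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-2b",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder data: each JSONL record is assumed to look like {"text": "..."}.
dataset = load_dataset("json", data_files="my_data.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```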
|
|
|
|