---
base_model: OpenLLM-Ro/RoLlama2-7b-Base
inference: false
language:
- ro
library_name: gguf
license: llama2
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
---
# RoLlama2-7b-Base-IMat-GGUF
_Llama.cpp imatrix quantization of [OpenLLM-Ro/RoLlama2-7b-Base](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base)_
Original Model: [OpenLLM-Ro/RoLlama2-7b-Base](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base)
Original dtype: `FP32` (`float32`)
Quantized by: llama.cpp [b2998](https://github.com/ggerganov/llama.cpp/releases/tag/b2998)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/imatrix.dat)
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [RoLlama2-7b-Base.Q8_0.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q8_0.gguf) | Q8_0 | 7.16GB | ✅ Available | No | 📦 No |
| [RoLlama2-7b-Base.Q6_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q6_K.gguf) | Q6_K | 5.53GB | ✅ Available | No | 📦 No |
| [RoLlama2-7b-Base.Q4_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q4_K.gguf) | Q4_K | 4.08GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.Q3_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q3_K.gguf) | Q3_K | 3.30GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.Q2_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q2_K.gguf) | Q2_K | 2.53GB | ✅ Available | Yes | 📦 No |
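If you are unsure which quant to grab, a rough rule of thumb is to take the largest one that fits your memory budget with some headroom. A minimal sketch (sizes hardcoded from the Common Quants table above; the `overhead_gb` margin for KV cache and compute buffers is an assumption, not a measured value):

```python
from typing import Optional

# File sizes in GB, copied from the Common Quants table above.
COMMON_QUANTS = {
    "Q8_0": 7.16,
    "Q6_K": 5.53,
    "Q4_K": 4.08,
    "Q3_K": 3.30,
    "Q2_K": 2.53,
}

def pick_quant(budget_gb: float, overhead_gb: float = 1.0) -> Optional[str]:
    """Return the largest quant whose file size plus overhead fits the budget,
    or None if nothing fits."""
    fitting = {q: s for q, s in COMMON_QUANTS.items() if s + overhead_gb <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)
```

For example, with 8 GB available this picks `Q6_K`, since `Q8_0` plus the assumed 1 GB of overhead would not fit.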
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [RoLlama2-7b-Base.FP16.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.FP16.gguf) | F16 | 13.48GB | ✅ Available | No | 📦 No |
| [RoLlama2-7b-Base.BF16.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.BF16.gguf) | BF16 | 13.48GB | ✅ Available | No | 📦 No |
| [RoLlama2-7b-Base.Q5_K.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q5_K.gguf) | Q5_K | 4.78GB | ✅ Available | No | 📦 No |
| [RoLlama2-7b-Base.Q5_K_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q5_K_S.gguf) | Q5_K_S | 4.65GB | ✅ Available | No | 📦 No |
| [RoLlama2-7b-Base.Q4_K_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q4_K_S.gguf) | Q4_K_S | 3.86GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.Q3_K_L.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q3_K_L.gguf) | Q3_K_L | 3.60GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.Q3_K_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q3_K_S.gguf) | Q3_K_S | 2.95GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.Q2_K_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.Q2_K_S.gguf) | Q2_K_S | 2.32GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ4_NL.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ4_NL.gguf) | IQ4_NL | 3.83GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ4_XS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ4_XS.gguf) | IQ4_XS | 3.62GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ3_M.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ3_M.gguf) | IQ3_M | 3.11GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ3_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ3_S.gguf) | IQ3_S | 2.95GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ3_XS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ3_XS.gguf) | IQ3_XS | 2.80GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ3_XXS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ3_XXS.gguf) | IQ3_XXS | 2.59GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ2_M.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ2_M.gguf) | IQ2_M | 2.36GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ2_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ2_S.gguf) | IQ2_S | 2.20GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ2_XS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ2_XS.gguf) | IQ2_XS | 2.03GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ2_XXS.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ2_XXS.gguf) | IQ2_XXS | 1.85GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ1_M.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ1_M.gguf) | IQ1_M | 1.65GB | ✅ Available | Yes | 📦 No |
| [RoLlama2-7b-Base.IQ1_S.gguf](https://huggingface.co/legraphista/RoLlama2-7b-Base-IMat-GGUF/blob/main/RoLlama2-7b-Base.IQ1_S.gguf) | IQ1_S | 1.53GB | ✅ Available | Yes | 📦 No |
## Downloading using huggingface-cli
First, make sure you have `huggingface-cli` installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download legraphista/RoLlama2-7b-Base-IMat-GGUF --include "RoLlama2-7b-Base.Q8_0.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/RoLlama2-7b-Base-IMat-GGUF --include "RoLlama2-7b-Base.Q8_0/*" --local-dir RoLlama2-7b-Base.Q8_0
# see FAQ for merging GGUF's
```
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `RoLlama2-7b-Base.Q8_0`)
3. Run `gguf-split --merge RoLlama2-7b-Base.Q8_0/RoLlama2-7b-Base.Q8_0-00001-of-XXXXX.gguf RoLlama2-7b-Base.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split.
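The step above requires finding the first chunk by hand. It can also be located programmatically; a hedged sketch, assuming the standard llama.cpp chunk naming (`<name>-00001-of-NNNNN.gguf`):

```python
import re
from pathlib import Path

# Matches llama.cpp split-GGUF chunk names, e.g. "...-00001-of-00005.gguf".
CHUNK_RE = re.compile(r"-(\d{5})-of-(\d{5})\.gguf$")

def first_chunk(folder: str) -> Path:
    """Return the first chunk of a split GGUF in `folder`, suitable for
    passing to `gguf-split --merge`."""
    chunks = sorted(p for p in Path(folder).glob("*.gguf") if CHUNK_RE.search(p.name))
    if not chunks:
        raise FileNotFoundError(f"no split GGUF chunks found in {folder}")
    return chunks[0]
```

Since chunk indices are zero-padded, lexicographic sorting is enough to put `-00001-` first.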
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!