---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- fblgit/UNA-TheBeagle-7b-v1
- argilla/distilabeled-Marcoro14-7B-slerp
- dpo
- rlhf
quantized_by: bartowski
pipeline_tag: text-generation
---

## ExLlamaV2 Quantizations of NeuralBeagle14-7B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.

# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains a different bits-per-weight quantization; the `main` branch contains only the measurement.json needed for further conversions.
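As a rough sketch of how that reuse works (directory names here are illustrative, and this assumes the exllamav2 repository is cloned and the original fp16 model is available locally), the published measurement.json can be passed to ExLlamaV2's `convert.py` so the expensive measurement pass is skipped:

```shell
# One-time measurement pass -- this is what produces measurement.json:
#   python convert.py -i ./NeuralBeagle14-7B -o ./work -nr -om measurement.json

# Quantize to 6.5 bpw with an 8-bit lm_head, reusing an existing measurement:
python convert.py -i ./NeuralBeagle14-7B -o ./work \
    -cf ./NeuralBeagle14-7B-exl2-6_5 -m measurement.json -b 6.5 -hb 8
```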

Original model: https://huggingface.co/mlabonne/NeuralBeagle14-7B

Model Size: 7b

| Branch | Bits | lm_head bits | Dataset | Size | Description |
| ----- | ---- | ------- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/8_0) | 8.0  | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/6_5) | 6.5  | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/5_0) | 5.0  | 6.0 | Default | 7.4 GB | Slightly lower quality than 6.5. |
| [4_0](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/4_0) | 4.0 | 6.0 | Default | 6.5 GB | Just under GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/NeuralBeagle14-7B-exl2/tree/3_5) | 3.5  | 6.0 | Default | 6.1 GB | Lower quality, only use if you have to. |

All VRAM requirements are estimated at 16k context. For 32k context, add ~2 GB.

## Download instructions

With git (this example clones only the `4_0` branch):

```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/NeuralBeagle14-7B-exl2
```
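Cloning model weights over git requires Git LFS; if it is not already set up, install it once before cloning (a standard step for any Hugging Face model repo, not specific to this one):

```shell
git lfs install
```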

With the `huggingface-hub` Python library (credit to TheBloke for the instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you just want the measurement.json) to a folder called `NeuralBeagle14-7B-exl2`:

```shell
mkdir NeuralBeagle14-7B-exl2
huggingface-cli download bartowski/NeuralBeagle14-7B-exl2 --local-dir NeuralBeagle14-7B-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir NeuralBeagle14-7B-exl2-6_5
huggingface-cli download bartowski/NeuralBeagle14-7B-exl2 --revision 6_5 --local-dir NeuralBeagle14-7B-exl2-6_5 --local-dir-use-symlinks False
```
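Once a branch is downloaded, a quick way to verify that the model loads and generates is ExLlamaV2's bundled test script (a minimal sketch assuming the exllamav2 repository is cloned alongside the model folder; the prompt is just an example):

```shell
python exllamav2/test_inference.py -m ./NeuralBeagle14-7B-exl2-6_5 -p "Once upon a time,"
```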