---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Locutusque/Hercules-2.5-Mistral-7B
- openchat/openchat-3.5-0106
base_model:
- Locutusque/Hercules-2.5-Mistral-7B
- openchat/openchat-3.5-0106
model-index:
- name: ChatHercules-2.5-Mistral-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.1
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hydra-project/ChatHercules-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.61
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hydra-project/ChatHercules-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.35
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hydra-project/ChatHercules-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 47.52
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hydra-project/ChatHercules-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.85
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hydra-project/ChatHercules-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.97
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=hydra-project/ChatHercules-2.5-Mistral-7B
      name: Open LLM Leaderboard
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of ChatHercules-2.5-Mistral-7B

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.14">turboderp's ExLlamaV2 v0.0.14</a> for quantization.

<b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b>

Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json used for further conversions (a sketch of that workflow follows).
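
If you want to produce a different bits-per-weight variant yourself, the measurement.json from the main branch can be reused so that ExLlamaV2 skips the measurement pass. Below is a minimal sketch using exllamav2's `convert.py`; all paths are placeholders, and the flag names reflect the v0.0.x convert script, so verify them against the release you actually install.

```shell
# Hypothetical paths: adjust to wherever you cloned exllamav2 and the original model.
git clone https://github.com/turboderp/exllamav2
cd exllamav2

# -i  : original (unquantized) model directory
# -o  : scratch/working directory for the conversion
# -cf : output directory for the compiled quantized model
# -b  : target bits per weight (e.g. 5.0)
# -m  : reuse the measurement.json from this repo's main branch
python convert.py \
  -i /path/to/ChatHercules-2.5-Mistral-7B \
  -o /path/to/workdir \
  -cf /path/to/ChatHercules-2.5-Mistral-7B-exl2-5_0 \
  -b 5.0 \
  -m /path/to/measurement.json
```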

Original model: https://huggingface.co/hydra-project/ChatHercules-2.5-Mistral-7B

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
| [8_0](https://huggingface.co/bartowski/ChatHercules-2.5-Mistral-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/ChatHercules-2.5-Mistral-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/ChatHercules-2.5-Mistral-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/ChatHercules-2.5-Mistral-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/ChatHercules-2.5-Mistral-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/ChatHercules-2.5-Mistral-7B-exl2 ChatHercules-2.5-Mistral-7B-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```
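
Optionally, before running the download commands below you can confirm the CLI entry point installed correctly. This check is an addition to the original instructions; `huggingface-cli env` simply prints version and environment details.

```shell
# Optional sanity check that huggingface-cli is on your PATH
huggingface-cli env
```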

To download the `main` branch (only useful if you only care about measurement.json) to a folder called `ChatHercules-2.5-Mistral-7B-exl2`:

```shell
mkdir ChatHercules-2.5-Mistral-7B-exl2
huggingface-cli download bartowski/ChatHercules-2.5-Mistral-7B-exl2 --local-dir ChatHercules-2.5-Mistral-7B-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir ChatHercules-2.5-Mistral-7B-exl2-6_5
huggingface-cli download bartowski/ChatHercules-2.5-Mistral-7B-exl2 --revision 6_5 --local-dir ChatHercules-2.5-Mistral-7B-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
mkdir ChatHercules-2.5-Mistral-7B-exl2-6.5
huggingface-cli download bartowski/ChatHercules-2.5-Mistral-7B-exl2 --revision 6_5 --local-dir ChatHercules-2.5-Mistral-7B-exl2-6.5 --local-dir-use-symlinks False
```
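
Once a branch is downloaded, you can sanity-check the quantized weights with the inference test script that ships in the exllamav2 repository. This is a hedged sketch rather than part of the original instructions: the script name (`test_inference.py`) and its `-m`/`-p` flags match recent exllamav2 releases, and the model path is a placeholder for wherever you downloaded the branch.

```shell
# Assumes you cloned https://github.com/turboderp/exllamav2 and installed its requirements
cd exllamav2

# -m: directory containing the downloaded exl2 branch, -p: a short test prompt
python test_inference.py \
  -m /path/to/ChatHercules-2.5-Mistral-7B-exl2-6_5 \
  -p "Once upon a time,"
```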

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski