Update README.md
<p><strong><font size="5">Information</font></strong></p>
Alpaca 30B 4-bit, working with the GPTQ versions used in Oobabooga's Text Generation Webui and KoboldAI.
<p>Quantized using <i>--true-sequential</i> and <i>--act-order</i> optimizations.</p>
This was made using Chansung's 30B Alpaca Lora: https://huggingface.co/chansung/alpaca-lora-30b
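As a rough sketch of how such a checkpoint is produced (this is an illustration, not the exact command used here), a quantization run with the optimizations named above, assuming qwopqwop200's GPTQ-for-LLaMa <i>llama.py</i>, could be driven like this; the merged-model path and output filename are placeholders:

```python
# Sketch only, assuming qwopqwop200's GPTQ-for-LLaMa llama.py; NOT the exact
# command used to produce this checkpoint. Paths and names are placeholders.
import subprocess

subprocess.run(
    [
        "python", "llama.py",
        "./alpaca-lora-30b-merged",      # merged HF-format model dir (placeholder)
        "c4",                            # calibration dataset
        "--wbits", "4",                  # 4-bit weights
        "--true-sequential",             # quantize each block's layers in sequential order
        "--act-order",                   # quantize columns in order of decreasing activation
        "--save", "alpaca-30b-4bit.pt",  # output checkpoint (placeholder name)
    ],
    check=True,
)
```

No <i>--groupsize 128</i> flag appears above, which matches the note at the end of the Benchmarks section.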

<p><strong><font size="5">Update 04.06.2023</font></strong></p>
<p>This is a more recent merge of Chansung's Alpaca Lora, which was updated using the cleaned Alpaca dataset as of 04/06/2023 and retrained with refined training parameters.</p>
<p><strong>Training Parameters</strong></p>
<ul>
<li>num_epochs=10</li>
<li>cutoff_len=512</li>
<li>group_by_length</li>
<li>lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'</li>
<li>lora_r=16</li>
<li>micro_batch_size=8</li>
</ul>
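As a minimal sketch, the LoRA-specific values above map onto a Hugging Face PEFT config roughly as follows; <i>lora_alpha</i> and <i>lora_dropout</i> are assumed values, since they are not listed in this card:

```python
# Minimal sketch mapping the LoRA-specific parameters above onto a PEFT config.
# lora_alpha and lora_dropout are assumptions; they are not listed in this card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                                     # lora_r=16
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # lora_target_modules
    lora_alpha=16,       # assumed; not stated above
    lora_dropout=0.05,   # assumed; not stated above
    bias="none",
    task_type="CAUSAL_LM",
)

# num_epochs, cutoff_len, micro_batch_size, and group_by_length are trainer-side
# settings (epoch count, tokenizer truncation length, per-device batch size, and
# length-grouped batching) rather than part of the LoRA config itself.
```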

<p><strong><font size="5">Benchmarks</font></strong></p>

<strong>Wikitext2</strong>: 4.58

<strong>C4</strong>: 6.32
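Lower is better for these perplexity scores. As background, a generic sketch of the fixed-window perplexity loop behind numbers like the Wikitext2 score is shown below; it assumes an ordinary HF-format model (loading this repo's 4-bit checkpoint instead requires the GPTQ loader), and the 2048-token window matching LLaMA's context length is an assumption about the setup:

```python
# Generic sketch of fixed-window perplexity measurement on wikitext2.
# The model path is a placeholder; this repo's 4-bit .pt checkpoint would
# need the GPTQ loader rather than from_pretrained.
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Concatenate the raw wikitext2 test split and tokenize it as one long stream.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

window = 2048  # LLaMA context length (assumed)
nll_sum, n_tokens = 0.0, 0
# Non-overlapping windows; stopping at size-1 guarantees every chunk has at
# least 2 tokens, so each forward pass has at least one prediction target.
for start in range(0, ids.size(1) - 1, window):
    chunk = ids[:, start : start + window].to(model.device)
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss  # mean NLL over chunk_len - 1 targets
    nll_sum += loss.item() * (chunk.size(1) - 1)
    n_tokens += chunk.size(1) - 1

print(f"wikitext2 perplexity: {math.exp(nll_sum / n_tokens):.2f}")
```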

<strong>Note</strong>: This version does not use <i>--groupsize 128</i>, so its evaluation scores are slightly higher. However, it allows fitting the whole model at full context in only 24GB of VRAM.
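As an illustration of the webui compatibility claim, launching this model in Oobabooga's Text Generation Webui could look roughly like the following; this is a sketch, the flag set reflects the webui's GPTQ loader around this model's release, and the model folder name is a placeholder:

```python
# Illustrative launch of Oobabooga's text-generation-webui with this checkpoint.
# Flag names reflect the webui's GPTQ loader around this model's release; the
# model folder name is a placeholder. Run from the webui's root directory.
import subprocess

subprocess.run(
    [
        "python", "server.py",
        "--model", "alpaca-30b-4bit",  # folder under models/ (placeholder)
        "--wbits", "4",                # matches the 4-bit quantization
        "--model_type", "llama",
        # no --groupsize flag: this checkpoint was quantized without --groupsize 128
    ],
    check=True,
)
```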