Update README.md
README.md CHANGED
@@ -17,9 +17,55 @@ Converted from the XORs weights from PygmalionAI's release https://huggingface.c
 
 Quantized for KoboldAI use using https://github.com/0cc4m/GPTQ-for-LLaMa
 
-I created several quantized variations of this model and believe this variation to be "best."
-
-
+I created several quantized variations of this model and believe this variation to be "best." <br>
+<!DOCTYPE html>
+<html>
+<head>
+<title>HTML Table Generator</title>
+<style>
+table {
+  border:1px solid #b3adad;
+  border-collapse:collapse;
+  padding:5px;
+}
+table th {
+  border:1px solid #b3adad;
+  padding:5px;
+  background: #f0f0f0;
+  color: #313030;
+}
+table td {
+  border:1px solid #b3adad;
+  text-align:center;
+  padding:5px;
+  background: #ffffff;
+  color: #313030;
+}
+</style>
+</head>
+<body>
+<table>
+<thead>
+<tr>
+<th>GPTQ Variation:</th>
+<th>Wikitext2</th>
+<th>Ptb-New</th>
+<th>C4-New</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>--act-order</td>
+<td>6.281311511993408</td>
+<td>46.79158401489258</td>
+<td>7.906069755554199</td>
+</tr>
+</tbody>
+</table>
+</body>
+</html>
+<br>Other benchmark scores are at the bottom of the readme.
+<hr>
 Metharme 7B is an instruct model based on Meta's LLaMA-7B.
 
 This is an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. See the [prompting](#prompting) section below for examples.
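The Wikitext2, Ptb-New and C4-New columns in the table added above are perplexity scores (lower is better), and the `--act-order` label refers to GPTQ's activation-order option, which quantizes weight columns in order of activation significance rather than left to right. As a rough illustration only, here is a minimal sketch of a sliding-window WikiText-2 perplexity measurement, assuming a checkpoint that plain `transformers` can load (the 4-bit safetensors file here needs a GPTQ-aware loader such as the linked 0cc4m fork or KoboldAI); the model path is a placeholder, and the actual evaluation code in GPTQ-for-LLaMa may differ in windowing details.

```python
# Minimal sketch of a WikiText-2 perplexity measurement (lower is better).
# Assumptions: an fp16 checkpoint loadable with plain transformers, a 2048-token
# LLaMA context window, and non-overlapping windows; the GPTQ-for-LLaMa eval
# script may differ in these details.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "path/to/metharme-7b"  # placeholder, not a real repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_DIR, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Concatenate the test split into one long token stream.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids

seq_len = 2048
losses = []
for start in range(0, ids.size(1) - seq_len, seq_len):
    window = ids[:, start : start + seq_len].to(model.device)
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean token cross-entropy.
        losses.append(model(window, labels=window).loss.float())

# All windows have equal length, so the mean of the per-window losses is exact.
ppl = torch.exp(torch.stack(losses).mean())
print(f"WikiText-2 perplexity: {ppl.item():.4f}")
```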
@@ -85,14 +131,7 @@ The intended use-case for this model is fictional writing for entertainment purp
 As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
 
 
-<p><strong><font size="5">Benchmarks</font></strong></p>
-
-<p><strong><font size="4">This Model:</font> <br><font size="4">4 Bit --Act-order</font></strong></p>
-<strong>Wikitext2</strong>: 6.281311511993408
-
-<strong>Ptb-New</strong>: 46.79158401489258
-
-<strong>C4-New</strong>: 7.906069755554199
+<p><strong><font size="5">Benchmarks of different quantized variations</font></strong></p>
 <hr>
 <!DOCTYPE html>
 <html>
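The unchanged context above also points at the readme's [prompting](#prompting) section for steering the model with natural language. For orientation, below is a small, hypothetical helper that assembles a prompt in the `<|system|>` / `<|user|>` / `<|model|>` style described on the upstream Metharme card; the exact token strings and the sample text are assumptions here, so defer to the prompting section of the full readme.

```python
# Hypothetical prompt builder for Metharme-style control tokens.
# The token strings are an assumption based on the upstream card; verify them
# against the "Prompting" section of the full readme before relying on them.
def build_prompt(system: str, user: str) -> str:
    """Return a single-turn prompt that leaves the model's turn open for generation."""
    return f"<|system|>{system}<|user|>{user}<|model|>"


if __name__ == "__main__":
    prompt = build_prompt(
        system="Enter RP mode. You are a narrator for an interactive story.",
        user="Describe the harbor town as the sun rises.",
    )
    print(prompt)
```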