Update README.md
README.md
CHANGED
<strong><span style="font-size: larger;">TUTORIAL🤗</span></strong>

Open [the text-generation-webui UI](https://github.com/oobabooga/text-generation-webui) as normal.
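A 4-bit GPTQ checkpoint is meant to run on a CUDA GPU, so it can be worth confirming that the Python environment the UI runs in actually sees one. This is an optional sanity check, not a step from the original instructions:

```python
# Optional sanity check (not part of the original steps): confirm that PyTorch
# in the text-generation-webui environment can see a CUDA GPU, since 4-bit GPTQ
# inference relies on CUDA kernels.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```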
Here is a tutorial on how to install the text-generation-webui UI: [tutorial](https://www.youtube.com/watch?v=lb_lC4XFedU&t).

Click the Model tab.

Under Download custom model or LoRA, enter RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g.

Click Download.

Wait until it says it's finished downloading.
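The Download button simply fetches every file in the RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g repo. If you prefer to grab the files from a script instead of the UI, a minimal sketch with huggingface_hub is shown below; the local directory name is only an example and assumes the webui's default models/ folder layout.

```python
# Minimal sketch: download the quantized repo outside the UI with huggingface_hub
# (pip install huggingface_hub). The local_dir below is an assumed example that
# follows text-generation-webui's default models/ folder layout.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g",
    local_dir="models/RedXeol_bertin-gpt-j-6B-alpaca-4bit-128g",  # example path
)
print("Model files in:", path)
```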
Click the Refresh icon next to Model in the top left.

In the Model drop-down, choose the model you just downloaded: bertin-gpt-j-6B-alpaca-4bit-128g.

If you see an error in the bottom right, ignore it - it's temporary.

Fill out the GPTQ parameters on the right: Bits = 4, Groupsize = 128, model_type = gptj.
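These fields describe how the checkpoint was quantized: 4 bits with a group size of 128, using AutoGPTQ, on a GPT-J style model. Purely as an illustration of what the UI values correspond to, here is how the same settings would be written as an AutoGPTQ quantize config; the README does not include the original quantization script, so treat this as a hedged reconstruction:

```python
# Illustration only: the webui's "Bits = 4" and "Groupsize = 128" fields map onto
# these AutoGPTQ quantize-config values. The original quantization script is not
# part of the README, so this is an assumed reconstruction, not the real one.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,          # matches Bits = 4 in the webui
    group_size=128,  # matches Groupsize = 128 (the "128g" in the repo name)
)
```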
Click Save settings for this model in the top right.

Click Reload the Model in the top right.

Once it says it's loaded, click the Text Generation tab and enter a prompt!
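If you would rather prompt the model from Python instead of the Text Generation tab, the sketch below loads the same checkpoint with the auto-gptq loader and generates a reply. It assumes a CUDA GPU, that the repo ships safetensors weights, and that `pip install auto-gptq transformers` has been run; the prompt and generation settings are arbitrary examples, not values from the README.

```python
# Hedged sketch: load the 4-bit checkpoint with auto-gptq and generate text,
# as an alternative to the webui's Text Generation tab.
# Assumptions: CUDA GPU, `pip install auto-gptq transformers`, and safetensors
# weights in the repo; none of this is spelled out in the README.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    use_safetensors=True,  # assumption about the file format in the repo
)

prompt = "Explica en una frase qué es la cuantización de un modelo de lenguaje."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```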
<strong><span style="font-size: larger;">TUTORIAL🤗</span></strong>

Open [the text-generation-webui UI](https://github.com/oobabooga/text-generation-webui) as normal.

Here is a tutorial on how to install the text-generation-webui UI: [tutorial](https://www.youtube.com/watch?v=lb_lC4XFedU&t).

Click the Model tab.

Under Download custom model or LoRA, enter RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g.

Click Download.

Wait until it says it's finished downloading.

Click the Refresh icon next to Model in the top left.

In the Model drop-down, choose the model you just downloaded: bertin-gpt-j-6B-alpaca-4bit-128g.

If you see an error in the bottom right, ignore it - it's temporary.

Fill out the GPTQ parameters on the right: Bits = 4, Groupsize = 128, model_type = gptj.

Click Save settings for this model in the top right.

Click Reload the Model in the top right.

Once it says it's loaded, click the Text Generation tab and enter a prompt.