Update README.md
README.md CHANGED

@@ -27,8 +27,7 @@ But maybe gguf model little bit slower then GPTQ especialy long text.
 
 You can use [text-generation-webui](https://github.com/oobabooga/text-generation-webui) to run this model fast(about 16 tokens/s on my RTX 3060) on your local PC.
 
-![text-generation-webui-sample](
-
+![text-generation-webui-sample](./text-generation-webui-sample.png "text-generation-webui")
 
 
 The explanation of [how to install Japanese text-generation-webui is here.](https://webbigdata.jp/post-19926/).
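For context, the change replaces an image reference that had been split across two lines (leaving the path empty, so the screenshot never rendered) with a single, complete Markdown image tag pointing at `./text-generation-webui-sample.png`. The general form of that tag, shown below with a hypothetical path and title purely for illustration, is:

```markdown
![alt text](./relative/path/to/image.png "optional hover title")
```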