Update README.md
README.md CHANGED

```diff
@@ -7,8 +7,7 @@ CUDA_VISIBLE_DEVICES=0 python llama.py /workspace/LLaVA-13B-v0/ c4 --wbits 4 --t
 
 on https://github.com/oobabooga/GPTQ-for-LLaMa CUDA branch of GPTQ (commit `57a2629`)
 
-see: https://github.com/oobabooga/text-generation-webui/
-and a PR which enables inference with images in webui: https://github.com/oobabooga/text-generation-webui/pull/1487
+YOU CAN NOW RUN IT IN [TEXT-GENERATION-WEBUI](https://github.com/oobabooga/text-generation-webui) with `llava` extension (see: https://github.com/oobabooga/text-generation-webui/tree/main/extensions/llava)
 
 ---
 license: other
```