anon8231489123 committed
Commit 5a03f4e
1 Parent(s): d904e50
Update README.md
README.md
CHANGED
@@ -1,7 +1,5 @@
  **Converted model for GPTQ from https://huggingface.co/lmsys/vicuna-13b-delta-v0. This is the best local model I've ever tried. I hope someone makes a version based on the uncensored dataset...**

- * IMPORTANT NOTE: Use the .safetensors model unless it does not work; in that case, try the .pt file.
-
  GPTQ conversion command (on CUDA branch):
  CUDA_VISIBLE_DEVICES=0 python llama.py ../lmsys/vicuna-13b-v0 c4 --wbits 4 --true-sequential --groupsize 128 --save vicuna-13b-4bit-128g.pt
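For completeness, a rough usage sketch for loading the saved checkpoint back for generation, assuming the same GPTQ-for-LLaMa CUDA branch; the llama_inference.py script and its flags are an assumption here, not part of this commit:

CUDA_VISIBLE_DEVICES=0 python llama_inference.py ../lmsys/vicuna-13b-v0 --wbits 4 --groupsize 128 --load vicuna-13b-4bit-128g.pt --text "Hello, how are you?"

Note that the quantization flags (--wbits 4, --groupsize 128) should match the ones used at conversion time, since they describe how the saved 4-bit weights are packed.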