JustinLin610 committed
Commit: 60a1a30
Parent(s): 3a70652
Update README.md
README.md CHANGED

@@ -75,7 +75,7 @@ generated_ids = [
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 ```
 
-For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-7B-Chat-GPTQ`, `Qwen1.5-7B-Chat-AWQ`, and `Qwen1.5-7B-Chat-GGUF`.
+For quantized models, we advise you to use the GPTQ, AWQ, and GGUF correspondents, namely `Qwen1.5-7B-Chat-GPTQ-Int8`, `Qwen1.5-7B-Chat-AWQ`, and `Qwen1.5-7B-Chat-GGUF`.
 
 
 ## Tips
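For context, below is a minimal sketch of how the quantized model named in the corrected line might be loaded and queried with Hugging Face `transformers`. The repo id `Qwen/Qwen1.5-7B-Chat-GPTQ-Int8` is taken from the diff, but the prompt, generation settings, and device placement are illustrative assumptions, and a GPTQ runtime (e.g. auto-gptq / optimum) is assumed to be installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id from the updated README line; loading the GPTQ checkpoint
# assumes a GPTQ backend (auto-gptq / optimum) is available.
model_name = "Qwen/Qwen1.5-7B-Chat-GPTQ-Int8"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat prompt with the tokenizer's chat template (example prompt is assumed).
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=128)
# Strip the prompt tokens so only the newly generated continuation is decoded,
# mirroring the `generated_ids` / `batch_decode` pattern shown in the diff context.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```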