JosephusCheung committed
Commit 3955950 • 1 Parent(s): 2941aff
Update README.md
README.md CHANGED
@@ -3,6 +3,10 @@ license: gpl-3.0
 ---
 # NOTE: This is not an official version of Qwen.
 
+Use the transformers library that does not require remote/external code to load the model, AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load LM, GPT2Tokenizer to load Tokenizer), and model quantization should be fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
+
+*Do not use wikitext for recalibration.*
+
 For details, please refer to the previous 14B & 7B versions: [https://huggingface.co/CausalLM/14B](https://huggingface.co/CausalLM/14B)
 
 Testing only, no performance guaranteeeee...
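
The added lines above describe how the model is meant to be loaded: stock transformers auto classes, no remote/external code. Below is a minimal, illustrative sketch of that loading path under stated assumptions: the repository id is a placeholder (this commit does not name the repo it belongs to, so the linked previous version CausalLM/14B is used only as an example), and the commented-out lines show the manual class choice the README mentions.

```python
# Illustrative sketch of the loading path described in the added README text.
# Assumption: the repo id below is a placeholder; the commit does not name the
# model repo it belongs to, so the linked previous version is used as an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/14B"  # placeholder repo id, not necessarily this model

# No trust_remote_code flag is needed: per the README, the model loads with
# the stock transformers classes only.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The README also notes the classes can be pinned manually:
# from transformers import GPT2Tokenizer, LlamaForCausalLM
# tokenizer = GPT2Tokenizer.from_pretrained(model_id)
# model = LlamaForCausalLM.from_pretrained(model_id)
```

Quantized GGUF, GPTQ, and AWQ variants are produced with their own tooling (llama.cpp and the respective quantization libraries) and are not shown here; the README only states that the architecture should be fully compatible with them.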