It is trained on around 155,000 sets of conversations, each set having 10 to 15 conversations.
Entire dataset was trained on Azure 4 x A100 80GB GPUs. For 3 epochs, training took 104 hours. The DeepSpeed codebase was used for training. This model was trained on Llama-1 by Meta.

Llama-1 was used as it is very useful for uncensored conversation.
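The README does not include the actual DeepSpeed configuration used. As a rough sketch, a minimal ZeRO stage-2 config of the kind commonly used for multi-GPU full fine-tuning might look like the following; all values here are illustrative assumptions, not the author's real settings:

```json
{
  "train_micro_batch_size_per_gpu": 4,
  "gradient_accumulation_steps": 8,
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "overlap_comm": true,
    "contiguous_gradients": true
  },
  "gradient_clipping": 1.0
}
```

Such a file is typically passed to the training script via DeepSpeed's `--deepspeed ds_config.json` flag.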
This is a fully fine-tuned model. Links for quantized models are given below.

**GPTQ GGML & AWQ**

GPTQ: [Link](https://huggingface.co/TheBloke/Uncensored-Jordan-33B-GPTQ)
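For readers who want to try the GPTQ build linked above, a minimal loading sketch with Hugging Face `transformers` might look like this. The package prerequisites (`transformers` plus `optimum` and `auto-gptq` for GPTQ weights) are assumptions; check the linked quantized repo's model card for its exact instructions:

```python
# Hypothetical usage sketch for the GPTQ-quantized repo linked above.
# Assumes: pip install transformers optimum auto-gptq (check the repo's card).

MODEL_ID = "TheBloke/Uncensored-Jordan-33B-GPTQ"

def load_model():
    # Imports are deferred so the sketch is readable without the heavy
    # dependencies installed; loading downloads the quantized weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    return tokenizer, model
```

`device_map="auto"` lets `transformers` place the quantized layers on available GPUs automatically.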