ajibawa-2023 committed
Commit: e0e60ef
Parent(s): 981454b
Update README.md
README.md CHANGED
@@ -20,6 +20,7 @@ Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took 13
 
 This is a full fine tuned model. Links for quantized models are given below.
 
+
 **GPTQ GGML & AWQ**
 
 GPTQ: [Link](https://huggingface.co/TheBloke/Python-Code-13B-GPTQ)