ajibawa-2023 committed
Commit: 981454b
1 Parent(s): 0ecd4aa
Update README.md
README.md CHANGED
@@ -18,6 +18,7 @@ I have released the [data](https://huggingface.co/datasets/ajibawa-2023/Python-C
**Training:**
The entire dataset was trained on Azure 4 x A100 80GB. Training took 13 hours for 3 epochs. The DeepSpeed codebase was used for training. The model was trained on Llama-2 by Meta.

+ This is a fully fine-tuned model. Links to the quantized models are given below.

**GPTQ, GGML & AWQ**
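Since the added line points readers to quantized GPTQ/GGML/AWQ builds as alternatives to the full fine-tune, a minimal inference sketch for the full-precision weights may help. This is only a sketch, assuming the model loads through Hugging Face `transformers` like any other Llama-2 fine-tune; the repo id below is a placeholder, not confirmed by this commit.

```python
# Minimal inference sketch for the full fine-tuned (non-quantized) model.
# Assumption: the weights load via AutoModelForCausalLM like a standard Llama-2 fine-tune.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/<this-model>"  # placeholder: replace with this model's actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps the full fine-tune within a single large GPU
    device_map="auto",          # requires `accelerate` to spread layers across available devices
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For lower memory use, the GPTQ, GGML, or AWQ builds linked in the README can be substituted; they trade some precision for a much smaller footprint than the full fp16 weights loaded here.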