Update README.md
README.md (CHANGED)
```diff
@@ -1,28 +1,30 @@
 ---
 datasets:
--
+- sahil2801/CodeAlpaca-20k
 library_name: peft
 tags:
--
+- llama2-7b
 - code
 - instruct
 - instruct-code
--
--
--
+- code-alpaca
+- alpaca-instruct
+- alpaca
+- llama7b
+- gpt2
 ---
 
-We finetuned
+We finetuned Llama2-7B on the CodeAlpaca instruct dataset (sahil2801/CodeAlpaca-20k) for 5 epochs (~25,000 steps) using the [MonsterAPI](https://monsterapi.ai) no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).
 
-This dataset is
+This dataset is an unfiltered version of HuggingFaceH4/CodeAlpaca_20K, with 36 instances of blatant alignment removed.
 
-The finetuning session
+The finetuning session completed in 4 hours and cost us only `$16` for the entire run!
 
 #### Hyperparameters & Run details:
 - Model Path: meta-llama/Llama-2-7b
-- Dataset:
+- Dataset: sahil2801/CodeAlpaca-20k
 - Learning rate: 0.0003
-- Number of epochs:
+- Number of epochs: 5
 - Data split: Training: 90% / Validation: 10%
 - Gradient accumulation steps: 1
 
@@ -31,4 +33,4 @@ Loss metrics:
 
 ---
 license: apache-2.0
----
+---
```
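As a point of reference, here is a minimal sketch of what the hyperparameters above correspond to in a plain `peft` + `transformers` setup. This is not the MonsterAPI no-code pipeline the card actually used (its internals are not described here); the LoRA rank/alpha/dropout values and the output directory are illustrative assumptions, while the model path, dataset, learning rate, epoch count, data split, and gradient accumulation steps are taken directly from the card.

```python
# Minimal sketch: a plain peft + transformers recreation of the run settings.
# NOT the MonsterAPI pipeline; anything not stated in the card is an assumption.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Model Path from the card (gated repo; the transformers-format weights
# live at meta-llama/Llama-2-7b-hf).
base = "meta-llama/Llama-2-7b"

# Dataset and 90% train / 10% validation split, as stated in the card.
data = load_dataset("sahil2801/CodeAlpaca-20k", split="train")
data = data.train_test_split(test_size=0.1)

model = AutoModelForCausalLM.from_pretrained(base)

# LoRA settings are assumptions -- the card only says library_name: peft.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# Run settings taken directly from the card.
args = TrainingArguments(
    output_dir="llama2-7b-codealpaca-lora",  # hypothetical output path
    learning_rate=3e-4,                      # Learning rate: 0.0003
    num_train_epochs=5,                      # Number of epochs: 5
    gradient_accumulation_steps=1,           # Gradient accumulation steps: 1
)
```

Feeding `args` and the tokenized splits into `transformers.Trainer` (or `trl`'s `SFTTrainer`) would complete the run; that part is omitted because the card does not describe the prompt formatting MonsterAPI applies.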