Update README.md
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-
+base-model: meta-llama/Llama-2-7b-hf
 tags:
 - llama-factory
 - generated_from_trainer
@@ -13,22 +13,6 @@ should probably proofread and complete it, then remove this comment. -->
 
 # KIEval-Experiments-SFT-Cheater
 
-This model is a fine-tuned version of [/nvme/gc/interactive-eval/LLaMA-Factory/outputs/llama2-7b-poi-sft/epoch3](https://huggingface.co//nvme/gc/interactive-eval/LLaMA-Factory/outputs/llama2-7b-poi-sft/epoch3) on the sharegpt_hyper dataset.
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -43,11 +27,7 @@ The following hyperparameters were used during training:
 - total_eval_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- num_epochs:
-
-### Training results
-
-
+- num_epochs: 4.0
 
 ### Framework versions
 
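For context, the hyperparameters the card keeps after this commit map directly onto standard Hugging Face training arguments. Below is a minimal sketch, assuming a transformers-style fine-tuning setup; the `output_dir` and the per-device/GPU split behind `total_eval_batch_size` are illustrative assumptions, not values taken from the commit.

```python
# Minimal sketch: the card's listed hyperparameters expressed as
# transformers.TrainingArguments. Only the values visible in this diff are
# from the card; output_dir and the per-device/GPU split are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="kieval-sft-cheater",  # hypothetical path, not from the card
    num_train_epochs=4.0,             # num_epochs: 4.0 (added by this commit)
    lr_scheduler_type="cosine",       # lr_scheduler_type: cosine
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # and epsilon=1e-08
    per_device_eval_batch_size=4,     # assumed: 4 per device x 8 GPUs = total_eval_batch_size 32
)
```

With LLaMA-Factory these values would normally come from the training config rather than be constructed by hand; the sketch only shows how the card's entries line up with the usual argument names.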