nthakur committed
Commit 53eb465
1 Parent(s): e76792a

Model save

Files changed (4)
  1. README.md +82 -0
  2. all_results.json +9 -0
  3. train_results.json +9 -0
  4. trainer_state.json +0 -0
README.md ADDED
@@ -0,0 +1,82 @@
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: peft
license: llama3
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Meta-Llama-3-8B-Instruct-mirage-meta-llama-3-sft-instruct
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Meta-Llama-3-8B-Instruct-mirage-meta-llama-3-sft-instruct

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2432
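Since this repository ships a PEFT adapter rather than full model weights (see `library_name: peft` above), it must be attached to the base model at load time. Below is a minimal loading and generation sketch; the adapter repo id is an assumption inferred from the committer and model name, so substitute the actual id if it differs.

```python
# Minimal sketch: load the base model, attach the PEFT adapter, and generate.
# ASSUMPTION: the adapter repo id below is inferred, not confirmed by this card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "nthakur/Meta-Llama-3-8B-Instruct-mirage-meta-llama-3-sft-instruct"  # assumed

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the adapter weights
model.eval()

# Llama-3-Instruct expects its chat template.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If the adapter is LoRA-based, `model.merge_and_unload()` folds it into the base weights so the result can be served without a peft dependency.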
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
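For readers reconstructing the run, here is a hedged sketch of how these values map onto `transformers.TrainingArguments`; the `trl`/`sft` tags above suggest TRL's SFTTrainer consumed them, but only the listed hyperparameters are taken from the card, and everything else is a placeholder or labeled assumption.

```python
# Hedged reconstruction of the reported hyperparameters.
# Only the values listed above come from the card; the rest are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",            # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=2,  # 2 per device x 4 GPUs x 2 accumulation = 16 total
    per_device_eval_batch_size=2,   # 2 per device x 4 GPUs = 8 total
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",            # Adam with the default betas/epsilon shown above
    eval_strategy="steps",
    eval_steps=200,                 # matches the 200-step cadence in the results table
    logging_steps=200,
)
```

The derived totals check out: 2 per-device x 4 devices x 2 accumulation steps gives the total_train_batch_size of 16, and 2 x 4 gives the total_eval_batch_size of 8.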
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3403 | 0.0597 | 200 | 0.3074 |
| 0.3224 | 0.1195 | 400 | 0.2954 |
| 0.3055 | 0.1792 | 600 | 0.2886 |
| 0.2899 | 0.2389 | 800 | 0.2804 |
| 0.3116 | 0.2987 | 1000 | 0.2772 |
| 0.3101 | 0.3584 | 1200 | 0.2728 |
| 0.2913 | 0.4182 | 1400 | 0.2679 |
| 0.2765 | 0.4779 | 1600 | 0.2625 |
| 0.2697 | 0.5376 | 1800 | 0.2601 |
| 0.2759 | 0.5974 | 2000 | 0.2557 |
| 0.264 | 0.6571 | 2200 | 0.2524 |
| 0.2705 | 0.7168 | 2400 | 0.2490 |
| 0.2694 | 0.7766 | 2600 | 0.2466 |
| 0.2639 | 0.8363 | 2800 | 0.2450 |
| 0.2598 | 0.8961 | 3000 | 0.2435 |
| 0.2483 | 0.9558 | 3200 | 0.2432 |
### Framework versions

- PEFT 0.10.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
all_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 1.0,
    "total_flos": 1.6414748941746176e+16,
    "train_loss": 0.27713136451503567,
    "train_runtime": 31532.8493,
    "train_samples": 53566,
    "train_samples_per_second": 1.699,
    "train_steps_per_second": 0.106
}
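As a sanity check, the derived fields are mutually consistent: 53566 samples / 31532.85 s ≈ 1.699 samples/s, and at the effective batch size of 16 that is 1.699 / 16 ≈ 0.106 optimizer steps/s, matching train_steps_per_second; the runtime works out to roughly 8.8 hours.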
train_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 1.0,
    "total_flos": 1.6414748941746176e+16,
    "train_loss": 0.27713136451503567,
    "train_runtime": 31532.8493,
    "train_samples": 53566,
    "train_samples_per_second": 1.699,
    "train_steps_per_second": 0.106
}
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff