Holmeister committed
Commit 4d7e641
1 Parent(s): 2a821e4

End of training

README.md CHANGED
@@ -17,7 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5176
+- Loss: 0.5185
 
 ## Model description
 
@@ -45,18 +45,19 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant
 - lr_scheduler_warmup_ratio: 0.03
-- num_epochs: 5
+- num_epochs: 6
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 1.3593 | 0.95 | 15 | 0.6244 |
-| 0.5702 | 1.97 | 31 | 0.5447 |
-| 0.4962 | 2.98 | 47 | 0.5198 |
-| 0.4396 | 4.0 | 63 | 0.5134 |
-| 0.384 | 4.76 | 75 | 0.5176 |
+| 2.0563 | 0.95 | 15 | 0.9446 |
+| 0.7495 | 1.97 | 31 | 0.6337 |
+| 0.5959 | 2.98 | 47 | 0.5719 |
+| 0.5474 | 4.0 | 63 | 0.5410 |
+| 0.545 | 4.95 | 78 | 0.5240 |
+| 0.4553 | 5.71 | 90 | 0.5185 |
 
 
 ### Framework versions
@@ -64,5 +65,5 @@ The following hyperparameters were used during training:
 - PEFT 0.7.2.dev0
 - Transformers 4.36.2
 - Pytorch 2.1.0+cu121
-- Datasets 2.16.0
+- Datasets 2.16.1
 - Tokenizers 0.15.0
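For reference, the hyperparameters listed in the README hunk above map roughly onto a `transformers` `TrainingArguments` setup like the sketch below. This is only an illustration: the learning rate, batch size, and output directory are not part of this hunk, so those values are placeholders rather than the repository's actual settings.

```python
from transformers import TrainingArguments

# Minimal sketch of the hyperparameters visible in the README diff above.
# learning_rate, per_device_train_batch_size, and output_dir do NOT appear
# in this hunk; the values used here are placeholders.
training_args = TrainingArguments(
    output_dir="falcon-7b-sharded-finetune",  # hypothetical name
    num_train_epochs=6,                       # raised from 5 to 6 in this commit
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    fp16=True,                                # "Native AMP" mixed-precision training
    learning_rate=2e-4,                       # placeholder
    per_device_train_batch_size=4,            # placeholder
)
```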
adapter_config.json CHANGED
@@ -20,9 +20,9 @@
     "revision": null,
     "target_modules": [
         "dense",
-        "dense_h_to_4h",
-        "dense_4h_to_h",
-        "query_key_value"
+        "query_key_value",
+        "dense_h_to_4_h",
+        "dense_4_h_to_h"
     ],
     "task_type": "CAUSAL_LM",
     "use_rslora": false
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:74d4e52c33fd50a00d453bd35b12cf2bf47ade2de31aaf5afa7dd04677d05001
-size 522227376
+oid sha256:2976acde0ba6371b817d2a0becd75b34b74d37f1c4e4c8772d9532cf5ce9aa61
+size 149964992
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7872a29ef45a3a43cef6a9d884573d338f8a80047dd902959db984d5be362f94
+oid sha256:6ce3a2b703f5a306e67289df7e1374007ad679f8951a701c161a8fd2fd650cf8
 size 4664