pszemraj committed
Commit 3b6f2da
1 Parent(s): e4b109a

End of training
Files changed (5)
  1. README.md +10 -8
  2. all_results.json +20 -0
  3. eval_results.json +14 -0
  4. train_results.json +10 -0
  5. trainer_state.json +0 -0
README.md CHANGED
@@ -1,5 +1,7 @@
  ---
  library_name: transformers
+ language:
+ - en
  license: apache-2.0
  base_model: pszemraj/tFINE-850m-24x24-v0.4-flan_aug
  tags:
@@ -16,15 +18,15 @@ should probably proofread and complete it, then remove this comment. -->

  # tFINE-850m-24x24-v0.4-flan_aug-infinity-instruct-7m-T2T_en-1024-v5

- This model is a fine-tuned version of [pszemraj/tFINE-850m-24x24-v0.4-flan_aug](https://huggingface.co/pszemraj/tFINE-850m-24x24-v0.4-flan_aug) on an unknown dataset.
+ This model is a fine-tuned version of [pszemraj/tFINE-850m-24x24-v0.4-flan_aug](https://huggingface.co/pszemraj/tFINE-850m-24x24-v0.4-flan_aug) on the pszemraj/infinity-instruct-7m-T2T_en dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.1526
- - Rouge1: 40.1804
- - Rouge2: 23.1008
- - Rougel: 32.3484
- - Rougelsum: 38.2103
- - Gen Len: 422.225
- - Num Input Tokens Seen: 421585440
+ - Loss: 1.1478
+ - Rouge1: 38.4805
+ - Rouge2: 22.5971
+ - Rougel: 31.1093
+ - Rougelsum: 36.596
+ - Gen Len: 441.475
+ - Num Input Tokens Seen: 435513684

  ## Model description

all_results.json ADDED
@@ -0,0 +1,20 @@
+ {
+ "epoch": 0.9999924371336737,
+ "eval_gen_len": 441.475,
+ "eval_loss": 1.1477829217910767,
+ "eval_rouge1": 38.4805,
+ "eval_rouge2": 22.5971,
+ "eval_rougeL": 31.1093,
+ "eval_rougeLsum": 36.596,
+ "eval_runtime": 1460.0731,
+ "eval_samples": 200,
+ "eval_samples_per_second": 0.137,
+ "eval_steps_per_second": 0.034,
+ "num_input_tokens_seen": 435513684,
+ "total_flos": 2.1022605922963784e+18,
+ "train_loss": 1.4536694422503678,
+ "train_runtime": 158351.1039,
+ "train_samples": 1586697,
+ "train_samples_per_second": 10.02,
+ "train_steps_per_second": 0.078
+ }
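As a sanity check, the throughput fields in all_results.json follow directly from the sample counts and runtimes reported in the same file. A minimal sketch (the values below are copied verbatim from the JSON above; file loading is omitted for brevity):

```python
# Verify that the derived throughput fields in all_results.json are
# consistent with the raw sample counts and runtimes (seconds).
results = {
    "train_samples": 1586697,
    "train_runtime": 158351.1039,        # seconds
    "train_samples_per_second": 10.02,
    "eval_samples": 200,
    "eval_runtime": 1460.0731,           # seconds
    "eval_samples_per_second": 0.137,
}

# samples / runtime should reproduce the reported samples_per_second
train_tput = results["train_samples"] / results["train_runtime"]
eval_tput = results["eval_samples"] / results["eval_runtime"]

print(round(train_tput, 2))  # 10.02, matches train_samples_per_second
print(round(eval_tput, 3))   # 0.137, matches eval_samples_per_second
```

The same arithmetic applies to eval_results.json and train_results.json, which repeat subsets of these fields.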
eval_results.json ADDED
@@ -0,0 +1,14 @@
+ {
+ "epoch": 0.9999924371336737,
+ "eval_gen_len": 441.475,
+ "eval_loss": 1.1477829217910767,
+ "eval_rouge1": 38.4805,
+ "eval_rouge2": 22.5971,
+ "eval_rougeL": 31.1093,
+ "eval_rougeLsum": 36.596,
+ "eval_runtime": 1460.0731,
+ "eval_samples": 200,
+ "eval_samples_per_second": 0.137,
+ "eval_steps_per_second": 0.034,
+ "num_input_tokens_seen": 435513684
+ }
train_results.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "epoch": 0.9999924371336737,
+ "num_input_tokens_seen": 435513684,
+ "total_flos": 2.1022605922963784e+18,
+ "train_loss": 1.4536694422503678,
+ "train_runtime": 158351.1039,
+ "train_samples": 1586697,
+ "train_samples_per_second": 10.02,
+ "train_steps_per_second": 0.078
+ }
trainer_state.json ADDED
The diff for this file is too large to render.