foxxy-hm committed
Commit d6d73fa
1 Parent(s): 7e42d6c

Training in progress epoch 0

Files changed (4)
  1. README.md +4 -4
  2. config.json +30 -0
  3. tf_model.h5 +2 -2
  4. tokenizer.json +2 -2
README.md CHANGED
@@ -14,8 +14,8 @@ probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 4.6679
- - Validation Loss: 2.1036
+ - Train Loss: 4.4772
+ - Validation Loss: 2.1093
  - Epoch: 0
 
  ## Model description
@@ -35,14 +35,14 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 15856, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 16208, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
  - training_precision: mixed_float16
 
  ### Training results
 
  | Train Loss | Validation Loss | Epoch |
  |:----------:|:---------------:|:-----:|
- | 4.6679 | 2.1036 | 0 |
+ | 4.4772 | 2.1093 | 0 |
 
 
  ### Framework versions
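
The serialized optimizer above is an AdamWeightDecay optimizer driven by a linear PolynomialDecay schedule, as produced by the `transformers` TensorFlow utilities. Below is a minimal sketch of rebuilding an equivalent setup with `create_optimizer`; the zero warmup step count is an assumption inferred from the plain (unwrapped) PolynomialDecay schedule and is not recorded in the card.

```python
import tensorflow as tf
from transformers import create_optimizer

# training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# AdamWeightDecay + PolynomialDecay, matching the hyperparameters in this commit.
optimizer, lr_schedule = create_optimizer(
    init_lr=5.6e-05,         # initial_learning_rate
    num_train_steps=16208,   # decay_steps after this commit (was 15856 before)
    num_warmup_steps=0,      # assumption: no WarmUp wrapper appears in the serialized config
    weight_decay_rate=0.01,  # weight_decay_rate
)
```

With zero warmup steps, `create_optimizer` returns a bare PolynomialDecay schedule whose `decay_steps` equals `num_train_steps`, which is consistent with the value stored in the card.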
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "_name_or_path": "google/mt5-small",
+   "architectures": [
+     "MT5ForConditionalGeneration"
+   ],
+   "d_ff": 1024,
+   "d_kv": 64,
+   "d_model": 512,
+   "decoder_start_token_id": 0,
+   "dense_act_fn": "gelu_new",
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "gated-gelu",
+   "initializer_factor": 1.0,
+   "is_encoder_decoder": true,
+   "is_gated_act": true,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "mt5",
+   "num_decoder_layers": 8,
+   "num_heads": 6,
+   "num_layers": 8,
+   "pad_token_id": 0,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "tie_word_embeddings": false,
+   "tokenizer_class": "T5Tokenizer",
+   "transformers_version": "4.27.2",
+   "use_cache": true,
+   "vocab_size": 250112
+ }
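
The new config.json pins the mt5-small architecture (8 encoder and 8 decoder layers, d_model 512, a 250112-token vocabulary) and transformers 4.27.2. A minimal sketch of loading the TensorFlow checkpoint committed here; the repository id is a placeholder, since the commit view does not show it.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo_id = "foxxy-hm/<repo-name>"  # placeholder: the actual repo id is not shown in this commit view

tokenizer = AutoTokenizer.from_pretrained(repo_id)        # reads tokenizer.json (T5Tokenizer family)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo_id)  # reads config.json and tf_model.h5
```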
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d76c6274d7eae84b455dc631745228cb511893ae9fcefed0a1225b7522d23c5a
- size 2225560376
+ oid sha256:dc4bbabb12153215f43f604763f41e05d0baf2c4a013c04ba107f5f0826a006a
+ size 2225556280
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:af95700bf514fc57a9f58055fec955839f0a378f707ddbc3df840291cbb709db
- size 16330466
+ oid sha256:93c3578052e1605d8332eb961bc08d72e246071974e4cc54aa6991826b802aa5
+ size 16330369
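
Both binary updates are Git LFS pointer changes: the repository stores only a small pointer file recording the blob's SHA-256 (`oid`) and its size in bytes, while the actual weights and tokenizer live in LFS storage. A small sketch of checking a downloaded file against its pointer; the local file path is an assumption.

```python
import hashlib
import os

def matches_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True if the local file matches the oid/size recorded in a Git LFS pointer."""
    if os.path.getsize(path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_oid

# Values taken from the updated tf_model.h5 pointer in this commit.
print(matches_lfs_pointer(
    "tf_model.h5",  # assumed local path after downloading the file
    "dc4bbabb12153215f43f604763f41e05d0baf2c4a013c04ba107f5f0826a006a",
    2225556280,
))
```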