durgaphaniteja985 committed on
Commit
0c7e66c
1 Parent(s): 06ad6e4

Training in progress epoch 0

Files changed (4)
  1. README.md +55 -0
  2. config.json +0 -1
  3. generation_config.json +7 -0
  4. tf_model.h5 +3 -0
README.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: google/mt5-small
+ tags:
+ - generated_from_keras_callback
+ model-index:
+ - name: durgaphaniteja985/mt5-small-finetuned-amazon-en-es
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information Keras had access to. You should
+ probably proofread and complete it, then remove this comment. -->
+
+ # durgaphaniteja985/mt5-small-finetuned-amazon-en-es
+
+ This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Train Loss: 12.6767
+ - Validation Loss: 6.4884
+ - Epoch: 0
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 6160, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - training_precision: mixed_float16
+
+ ### Training results
+
+ | Train Loss | Validation Loss | Epoch |
+ |:----------:|:---------------:|:-----:|
+ | 12.6767 | 6.4884 | 0 |
+
+
+ ### Framework versions
+
+ - Transformers 4.46.2
+ - TensorFlow 2.17.1
+ - Datasets 3.1.0
+ - Tokenizers 0.20.3
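The optimizer entry in the card above is a raw Keras serialization: AdamWeightDecay driven by a linear (power=1.0) PolynomialDecay schedule from 5.6e-05 down to 0.0 over 6160 steps, with weight_decay_rate 0.01 and mixed_float16 precision. As a rough sketch (not part of this commit), the same setup can be recreated with `transformers.create_optimizer`; the warmup-step count is an assumption, since the card does not report one:

```python
import tensorflow as tf
from transformers import create_optimizer

# Match the card's training_precision: mixed_float16.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# AdamWeightDecay over a linear PolynomialDecay schedule:
# 5.6e-05 -> 0.0 across 6160 decay steps, weight decay 0.01,
# beta_1=0.9, beta_2=0.999, epsilon=1e-08 (create_optimizer defaults).
optimizer, lr_schedule = create_optimizer(
    init_lr=5.6e-5,
    num_train_steps=6160,
    num_warmup_steps=0,  # assumption: no warmup is listed in the card
    weight_decay_rate=0.01,
)
```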
config.json CHANGED
@@ -25,7 +25,6 @@
  "relative_attention_num_buckets": 32,
  "tie_word_embeddings": false,
  "tokenizer_class": "T5Tokenizer",
- "torch_dtype": "float32",
  "transformers_version": "4.46.2",
  "use_cache": true,
  "vocab_size": 250112
 
25
  "relative_attention_num_buckets": 32,
26
  "tie_word_embeddings": false,
27
  "tokenizer_class": "T5Tokenizer",
 
28
  "transformers_version": "4.46.2",
29
  "use_cache": true,
30
  "vocab_size": 250112
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "_from_model_config": true,
+ "decoder_start_token_id": 0,
+ "eos_token_id": 1,
+ "pad_token_id": 0,
+ "transformers_version": "4.46.2"
+ }
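The new generation_config.json pins the token ids mT5 expects at generation time (decoder start 0, end-of-sequence 1, padding 0). As a minimal sketch, assuming the repo id from the model card above, the saved values can be loaded and inspected through `transformers.GenerationConfig`:

```python
from transformers import GenerationConfig

# Repo id taken from the model card; swap in a local path for an offline clone.
gen_config = GenerationConfig.from_pretrained(
    "durgaphaniteja985/mt5-small-finetuned-amazon-en-es"
)
print(gen_config.decoder_start_token_id)  # 0
print(gen_config.eos_token_id)            # 1
print(gen_config.pad_token_id)            # 0
```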
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1354863d4dd2878aa164bf90427e66a7c2bc630863bf500e4e00b16a28bba6c1
+ size 2225556280
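tf_model.h5 is committed as a Git LFS pointer, so a plain `git clone` only fetches the three-line stub above; the ~2.2 GB weight file lives in LFS storage (or is downloaded automatically by `from_pretrained`). A minimal usage sketch, assuming the repo id from the model card; the sample review text is invented, and with training only at epoch 0 the summaries will still be rough:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

repo_id = "durgaphaniteja985/mt5-small-finetuned-amazon-en-es"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(repo_id)  # pulls tf_model.h5 (~2.2 GB)

# Hypothetical product review used only to exercise the model.
inputs = tokenizer(
    "I bought this coffee grinder last month and it still works perfectly.",
    return_tensors="tf",
)
summary_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```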