arun3936 committed on
Commit ccf6bf4
1 Parent(s): c31da60

End of training

README.md ADDED
---
license: cc-by-nc-4.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- bleu
- rouge
base_model: facebook/nllb-200-3.3B
model-index:
- name: nllb-200-3.3B-Malayalam_English_Translationt_nllb6
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# nllb-200-3.3B-Malayalam_English_Translationt_nllb6

This model is a fine-tuned version of [facebook/nllb-200-3.3B](https://huggingface.co/facebook/nllb-200-3.3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0031
- Bleu: 37.4644
- Rouge: rouge1 0.6948, rouge2 0.4753, rougeL 0.6436, rougeLsum 0.6438
- Chrf: 63.5623 (char_order 6, word_order 0, beta 2)

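The adapter can be loaded on top of the base model with `peft`. A minimal sketch; the adapter repo id below (built from the committer's username and the model name) and the NLLB language codes `mal_Mlym` → `eng_Latn` for the translation direction are assumptions, not stated in this card:

```python
# Usage sketch. Assumptions: the adapter repo id, and the language
# codes mal_Mlym -> eng_Latn for Malayalam-to-English translation.
def load_translator(
    base_id="facebook/nllb-200-3.3B",
    adapter_id="arun3936/nllb-200-3.3B-Malayalam_English_Translationt_nllb6",
):
    """Return (tokenizer, model) with the LoRA adapter applied."""
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id, src_lang="mal_Mlym")
    base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)
    return tokenizer, model


def translate(text, tokenizer, model):
    """Translate one Malayalam sentence to English."""
    inputs = tokenizer(text, return_tensors="pt")
    out = model.generate(
        **inputs,
        # Force English as the target language for NLLB generation.
        forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
        max_new_tokens=128,
    )
    return tokenizer.batch_decode(out, skip_special_tokens=True)[0]
```

Note that the 3.3B base model is large; loading it in reduced precision or with quantization may be necessary on consumer hardware.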
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5

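With a cosine schedule, the learning rate decays from 0.0002 toward zero over the roughly 47,000 training steps. A sketch of the decay curve, assuming no warmup steps (the hyperparameters above do not mention any):

```python
import math


def cosine_lr(step, total_steps, base_lr=2e-4):
    """Cosine learning-rate decay without warmup: base_lr at step 0,
    ~0 at total_steps (the shape of a cosine scheduler, no restarts)."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * step / total_steps))
```

Halfway through training (step 23,500 of 47,000) this gives about 1e-4, i.e. half the initial rate.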
### Training results

Chrf is reported with char_order 6, word_order 0, beta 2 throughout.

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Rouge1 | Rouge2 | RougeL | RougeLsum | Chrf    |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.1299        | 1.0   | 9400  | 1.0473          | 35.4794 | 0.6827 | 0.4568 | 0.6303 | 0.6304    | 62.0777 |
| 1.0391        | 2.0   | 18800 | 1.0172          | 36.5551 | 0.6899 | 0.4679 | 0.6376 | 0.6378    | 62.7949 |
| 0.9772        | 3.0   | 28200 | 1.0047          | 37.1999 | 0.6941 | 0.4729 | 0.6422 | 0.6424    | 63.3837 |
| 0.9322        | 4.0   | 37600 | 1.0021          | 37.3505 | 0.6946 | 0.4746 | 0.6434 | 0.6435    | 63.4442 |
| 0.9109        | 5.0   | 47000 | 1.0031          | 37.4644 | 0.6948 | 0.4753 | 0.6436 | 0.6438    | 63.5623 |

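For reference, the Chrf column is a character n-gram F-score. A simplified pure-Python sketch of the metric (whitespace stripped, precision and recall averaged over n-gram orders 1..char_order, no epsilon smoothing, so it will not match sacrebleu's chrF exactly):

```python
from collections import Counter


def chrf(hypothesis, reference, char_order=6, beta=2.0):
    """Simplified chrF: character n-gram F-beta score, averaged over
    n-gram orders 1..char_order, with whitespace removed first."""
    hyp = hypothesis.replace(" ", "")
    ref = reference.replace(" ", "")
    precisions, recalls = [], []
    for n in range(1, char_order + 1):
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        # Clipped overlap: each n-gram counts at most min(hyp, ref) times.
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        precisions.append(overlap / max(sum(hyp_ngrams.values()), 1))
        recalls.append(overlap / max(sum(ref_ngrams.values()), 1))
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return 100 * (1 + beta ** 2) * p * r / (beta ** 2 * p + r)
```

With beta=2 (the setting reported above), recall is weighted more heavily than precision.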
### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.1
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.15.0

runs/Dec29_05-37-45_arungpu/events.out.tfevents.1703828266.arungpu.352.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:884b82750f4d664f733bdd44fb411d7456e68818a8d308b2cc8ef216a21881e2
-size 6902
+oid sha256:7f50f7ee3e6d2bbffddb16a6c319f2995ade6af49eb5c8599739d2eb6a2b936f
+size 7587