Ransaka committed on
Commit 358b198
1 Parent(s): af02c02

End of training

README.md CHANGED
@@ -14,8 +14,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [Ransaka/sinhala-ocr-model](https://huggingface.co/Ransaka/sinhala-ocr-model) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 6.2306
-- Cer: 0.5161
+- eval_loss: 4.8494
+- eval_cer: 0.4227
+- eval_runtime: 229.7041
+- eval_samples_per_second: 1.776
+- eval_steps_per_second: 0.444
+- epoch: 5.23
+- step: 400
 
 ## Model description
 
@@ -34,35 +39,17 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 8e-05
+- learning_rate: 1e-05
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 8
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - training_steps: 6000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Cer    |
-|:-------------:|:-----:|:----:|:---------------:|:------:|
-| 4.543         | 3.27  | 500  | 6.2682          | 0.7086 |
-| 2.6146        | 6.54  | 1000 | 5.8348          | 0.6390 |
-| 1.8448        | 9.8   | 1500 | 5.8076          | 0.6166 |
-| 1.3887        | 13.07 | 2000 | 6.0250          | 0.6072 |
-| 1.0271        | 16.34 | 2500 | 5.9971          | 0.5707 |
-| 0.8891        | 19.61 | 3000 | 5.9803          | 0.5630 |
-| 0.6548        | 22.88 | 3500 | 6.0045          | 0.5542 |
-| 0.4939        | 26.14 | 4000 | 6.0223          | 0.5354 |
-| 0.322         | 29.41 | 4500 | 6.1360          | 0.5233 |
-| 0.2459        | 32.68 | 5000 | 6.1166          | 0.5220 |
-| 0.123         | 35.95 | 5500 | 6.1740          | 0.5162 |
-| 0.1575        | 39.22 | 6000 | 6.2306          | 0.5161 |
-
-
 ### Framework versions
 
 - Transformers 4.35.2
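The `eval_cer` value reported above is a character error rate. The exact evaluation code used for this model is not shown in the commit, but CER is conventionally the character-level edit (Levenshtein) distance between the predicted and reference strings, normalised by the reference length. A minimal pure-Python sketch of that convention:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance over characters,
    # keeping only the previous row to stay O(len(b)) in memory.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def cer(prediction: str, reference: str) -> float:
    # Character error rate: edit distance / reference length.
    return levenshtein(prediction, reference) / len(reference)
```

Under this definition, one wrong character in a four-character reference gives a CER of 0.25, so the reported eval_cer of 0.4227 corresponds to roughly 42 character edits per 100 reference characters.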
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c5521598bb48b042a156da975f3527fb6c1809df90f0ff3b672c7097145988fe
+oid sha256:c9d409c533ee4e3ed6cb94d3e71aefb587b699a7efbeaaf08ee089a86133d7c1
 size 1260933520
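Two of the README hyperparameters interact: with `gradient_accumulation_steps: 4`, the optimizer steps once every four batches of `train_batch_size: 4`, giving the listed `total_train_batch_size: 16`; and `lr_scheduler_type: linear` decays the learning rate linearly to zero over `training_steps: 6000`. A sketch of both, assuming zero warmup steps (the README does not list a warmup value):

```python
def linear_lr(step: int, base_lr: float = 1e-05, total_steps: int = 6000,
              warmup_steps: int = 0) -> float:
    # Linear warmup (if any) followed by linear decay to zero,
    # matching lr_scheduler_type: linear with warmup assumed to be 0.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))


# Effective batch size under gradient accumulation:
# train_batch_size * gradient_accumulation_steps = 4 * 4 = 16,
# which matches total_train_batch_size in the README.
effective_batch = 4 * 4
```

At step 3000 (halfway through training) this schedule yields half the base learning rate, 5e-06.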
runs/Jan05_03-20-25_beb7e32edffa/events.out.tfevents.1704424826.beb7e32edffa.42.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:90b572a691a945cf45d2449d73a58d8d1fd345c338a7adfe99247ef302874e24
-size 11305
+oid sha256:d22e0022c92a123a7d92c81964f58f9a7d542c32a45909900aafc488a5135c4a
+size 12251
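Both binary files in this commit are stored as Git LFS pointer files with the three fields shown above (`version`, `oid`, `size`); the actual weights live in LFS storage, keyed by the SHA-256 digest. A small illustrative parser for that pointer format (the helper name is mine, not part of any tool used here):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # oid is written as "<hash-algo>:<hex-digest>"; size is a byte count.
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}
```

Applied to the new model.safetensors pointer, this yields a sha256 digest and a size of 1260933520 bytes (about 1.26 GB), which is unchanged from the previous commit: only the digest differs, i.e. the weights were overwritten with a same-shaped checkpoint.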