Ransaka committed commit ed30ec0 · 1 parent: 117c35c

Update README.md

Files changed (1): README.md (+7 −38)
README.md CHANGED
@@ -4,36 +4,18 @@ model-index:
 - name: sinhala-ocr-model-v2
   results: []
 pipeline_tag: image-to-text
-widget:
-- src: >-
-    https://datasets-server.huggingface.co/assets/Ransaka/sinhala_synthetic_ocr/--/bf7c8a455b564cd73fe035031e19a5f39babb73b/--/default/train/0/image/image.jpg
-  example_title: Synthetic 1
-- src: >-
-    https://datasets-server.huggingface.co/assets/Ransaka/sinhala_synthetic_ocr/--/bf7c8a455b564cd73fe035031e19a5f39babb73b/--/default/train/1/image/image.jpg
-  example_title: Synthetic 2
-- src: >-
-    https://datasets-server.huggingface.co/assets/Ransaka/sinhala_synthetic_ocr/--/bf7c8a455b564cd73fe035031e19a5f39babb73b/--/default/train/9/image/image.jpg
-  example_title: Synthetic 3
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# sinhala-ocr-model-v2
+# TrOCR-Sinhala
 
-This model is a fine-tuned version of [Ransaka/sinhala-ocr-model](https://huggingface.co/Ransaka/sinhala-ocr-model) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- eval_loss: 4.8494
-- eval_cer: 0.4227
-- eval_runtime: 229.7041
-- eval_samples_per_second: 1.776
-- eval_steps_per_second: 0.444
-- epoch: 5.23
-- step: 400
+See the training metrics tab for performance details.
 
 ## Model description
 
-More information needed
+This model is a fine-tuned version of Microsoft [TrOCR Printed](https://huggingface.co/microsoft/trocr-base-printed).
 
 ## Intended uses & limitations
 
@@ -43,25 +25,12 @@ More information needed
 
 More information needed
 
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 1e-05
-- train_batch_size: 4
-- eval_batch_size: 4
-- seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 16
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- training_steps: 6000
-- mixed_precision_training: Native AMP
-
 ### Framework versions
 
 - Transformers 4.35.2
 - Pytorch 2.0.0
 - Datasets 2.16.0
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
+
+## Examples
+<img src='https://datasets-server.huggingface.co/assets/Ransaka/sinhala_synthetic_ocr/--/bf7c8a455b564cd73fe035031e19a5f39babb73b/--/default/train/0/image/image.jpg'>
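
The evaluation block removed by this commit reported an eval_cer of 0.4227. As context for that metric, here is a minimal sketch of character error rate under its standard definition (Levenshtein edit distance divided by reference length); the `cer` function name is illustrative and not part of this repository:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein edit distance / len(reference)."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m

print(cer("abcd", "abxd"))  # one substitution over 4 chars -> 0.25
```

A CER of 0.4227 therefore means roughly 42 character-level edits per 100 reference characters; libraries such as `jiwer` or the `evaluate` package compute the same quantity in practice.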