Virajtharinda committed
Commit 956647a
1 Parent(s): 2bd8df7

Update README.md

Files changed (1):
  1. README.md +5 -15
README.md CHANGED
@@ -3,10 +3,9 @@ model-index:
 - name: sinhala-bert-small
   results: []
 widget:
-- text: අපි තමයි [MASK] කරේ.
+- text: ළමයා ගෙදර [MASK].
 - text: මට හෙට එන්න වෙන්නේ [MASK].
 - text: අපි ගෙදර [MASK].
-- text: සිංහල සහ [MASK] අලුත් අවුරුද්ද.
 license: mit
 language:
 - si
@@ -15,9 +14,9 @@ language:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# sinhala-bert-small
+# sinhala-word-prediction
 
-This model is pretrained on Sinhala data srources.
+This model is pre-trained using Sinhala data sources.
 
 ## Model description
 
@@ -26,16 +25,6 @@ This model is pretrained on Sinhala data srources.
 num_attention_heads = 6
 intermediate_size = 1024
 
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -56,4 +45,5 @@ The following hyperparameters were used during training:
 - Transformers 4.33.3
 - Pytorch 2.0.0
 - Datasets 2.14.5
--- Tokenizers 0.13.3
+- Tokenizers 0.13.3
+
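The diff keeps two architecture fields from the card's "Model description" section: num_attention_heads = 6 and intermediate_size = 1024. A minimal sketch of a matching transformers config follows; hidden_size and num_hidden_layers are assumptions added for illustration, since the card shows neither:

```python
# Sketch of a BERT config using the two values shown in this card.
# hidden_size and num_hidden_layers are NOT in the diff; they are guesses,
# and hidden_size must be divisible by num_attention_heads.
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    num_attention_heads=6,   # from the model card
    intermediate_size=1024,  # from the model card
    hidden_size=384,         # assumption: 384 / 6 = 64-dim attention heads
    num_hidden_layers=6,     # assumption: shallow stack for a "small" model
)
model = BertForMaskedLM(config)
print(f"{model.num_parameters():,} parameters")
```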
 
 
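The updated widget prompt can also be tried locally with the fill-mask pipeline. This is a sketch only: the Hub repository id below is an assumption pieced together from the committer and the model-index name, so substitute the real one. The prompt "ළමයා ගෙදර [MASK]." reads roughly as "The child [MASK] home."

```python
# Sketch: query the masked LM with the widget prompt added in this commit.
# The model id is hypothetical; replace it with the actual Hub repository.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Virajtharinda/sinhala-bert-small")

for pred in fill_mask("ළමයා ගෙදර [MASK]."):
    print(f"{pred['score']:.3f}  {pred['sequence']}")
```

The hosted widget on the model page runs the same fill-mask task, so the prompts in the YAML `widget:` list should behave identically there.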