Ransaka committed on
Commit fbc0693
1 Parent(s): ee5d78e

Update README.md

Files changed (1)
  1. README.md +18 -11
README.md CHANGED
@@ -1,22 +1,29 @@
  ---
- base_model: Ransaka/sinhala-bert-medium-v1
  tags:
  - generated_from_trainer
  model-index:
- - name: sinhala-bert-medium-v2
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # sinhala-bert-medium-v2
-
- This model is a fine-tuned version of [Ransaka/sinhala-bert-medium-v1](https://huggingface.co/Ransaka/sinhala-bert-medium-v1) on an unknown dataset.

  ## Model description

- More information needed

  ## Intended uses & limitations

@@ -31,13 +38,13 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 128
  - eval_batch_size: 8
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 2

  ### Training results

  ---
  tags:
  - generated_from_trainer
  model-index:
+ - name: sinhala-bert-medium
  results: []
+ widget:
+ - text: "අපි තමයි [MASK] කරේ."
+ - text: "මට හෙට එන්න වෙන්නේ [MASK]."
+ - text: "අපි ගෙදර [MASK]."
+ - text: 'සිංහල සහ [MASK] අලුත් අවුරුද්ද.'
+ license: mit
+ language:
+ - si
  ---

+ # sinhala-bert-medium

+ This model is pretrained on Sinhala data sources.

  ## Model description

+ hidden_size = 786
+ num_hidden_layers = 6
+ num_attention_heads = 6
+ intermediate_size = 1024

  ## Intended uses & limitations

  ### Training hyperparameters

  The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 64
  - eval_batch_size: 8
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
+ - num_epochs: 10

  ### Training results
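
The architecture values added to the Model description (hidden_size, num_hidden_layers, num_attention_heads, intermediate_size) are enough for a rough parameter estimate of the encoder. A back-of-the-envelope sketch, assuming the standard BERT encoder layout and a vocabulary size of 30,000 (the card does not state the vocabulary size — that number is an assumption here):

```python
# Rough parameter count from the sizes listed in the model card.
hidden_size = 786
num_hidden_layers = 6
intermediate_size = 1024
vocab_size = 30000  # assumption: not given in the card

# Per encoder layer: Q, K, V and output projections, each with a bias.
attn = 4 * (hidden_size * hidden_size + hidden_size)
# Feed-forward: up-projection and down-projection, each with a bias.
ffn = (hidden_size * intermediate_size + intermediate_size
       + intermediate_size * hidden_size + hidden_size)
# Two LayerNorms per layer, each with a weight and bias vector.
norms = 2 * (2 * hidden_size)
per_layer = attn + ffn + norms

# Token embeddings only, ignoring position/type embeddings and the MLM head.
embeddings = vocab_size * hidden_size
total = embeddings + num_hidden_layers * per_layer
print(f"~{total / 1e6:.0f}M parameters")  # → ~48M parameters
```

Most of the budget sits in the embedding table at this scale, so the true count is sensitive to the actual vocabulary size.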