---
license: mit
base_model: cahya/bert-base-indonesian-522M
tags:
- generated_from_keras_callback
model-index:
- name: racheilla/bert-base-indonesian-522M-finetuned-pemilu
  results: []
---


# racheilla/bert-base-indonesian-522M-finetuned-pemilu

This model is a fine-tuned version of [cahya/bert-base-indonesian-522M](https://huggingface.co/cahya/bert-base-indonesian-522M) on an unknown dataset.
After the final epoch (epoch 39) it achieves:
- Train Loss: 3.2573
- Validation Loss: 3.4101

## Model description

More information needed

## Intended uses & limitations

More information needed
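The intended task is not documented. Since the base checkpoint is a BERT masked language model, a plausible minimal usage sketch is the `fill-mask` pipeline; this is an assumption, not a documented use, and the example sentence below is illustrative only:

```python
from transformers import pipeline

# Assumption: the checkpoint carries a masked-language-modeling head,
# like its base model cahya/bert-base-indonesian-522M.
MODEL_ID = "racheilla/bert-base-indonesian-522M-finetuned-pemilu"

def build_fill_mask():
    # Downloads the weights from the Hugging Face Hub on first call.
    return pipeline("fill-mask", model=MODEL_ID)

if __name__ == "__main__":
    fill_mask = build_fill_mask()
    # "Pemilu" is Indonesian for "election"; [MASK] is BERT's mask token.
    for pred in fill_mask("Pemilihan umum akan [MASK] pada tahun 2024."):
        print(pred["token_str"], round(pred["score"], 4))
```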

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
  - learning_rate: WarmUp (initial_learning_rate: 2e-05, warmup_steps: 1000, power: 1.0)
    - decay_schedule_fn: PolynomialDecay (initial_learning_rate: 2e-05, decay_steps: -950, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
  - decay: 0.0
  - weight_decay_rate: 0.01
- training_precision: mixed_float16
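The schedule in the optimizer config is a linear warmup (`WarmUp`) into a polynomial decay with power 1.0, i.e. linear decay. Note that the logged `decay_steps` is negative (-950), which suggests `warmup_steps` (1000) exceeded the total number of training steps. A stdlib-only sketch of the intended schedule, using an illustrative positive `decay_steps` rather than the logged value:

```python
def warmup_polynomial_lr(step, initial_lr=2e-05, warmup_steps=1000,
                         decay_steps=1000, end_lr=0.0, power=1.0):
    """Linear warmup to initial_lr, then polynomial decay to end_lr.

    decay_steps=1000 here is illustrative; the logged config records -950.
    """
    if step < warmup_steps:
        # Linear ramp from 0 to initial_lr over warmup_steps.
        return initial_lr * (step / warmup_steps)
    # Decay phase: fraction of the decay window already elapsed, capped at 1.
    decay_step = min(step - warmup_steps, decay_steps)
    frac = decay_step / decay_steps
    return (initial_lr - end_lr) * (1.0 - frac) ** power + end_lr
```

With these parameters the rate climbs to 2e-05 at step 1000, then falls linearly back to 0 by step 2000.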

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2847     | 3.4266          | 0     |
| 3.3000     | 3.4116          | 1     |
| 3.2702     | 3.3975          | 2     |
| 3.2675     | 3.4689          | 3     |
| 3.2982     | 3.3540          | 4     |
| 3.3109     | 3.4127          | 5     |
| 3.2698     | 3.4126          | 6     |
| 3.2852     | 3.4165          | 7     |
| 3.2977     | 3.3816          | 8     |
| 3.2749     | 3.3923          | 9     |
| 3.2777     | 3.3841          | 10    |
| 3.2555     | 3.4534          | 11    |
| 3.2940     | 3.4194          | 12    |
| 3.2860     | 3.3810          | 13    |
| 3.2585     | 3.3328          | 14    |
| 3.2979     | 3.4310          | 15    |
| 3.2844     | 3.4374          | 16    |
| 3.2961     | 3.3630          | 17    |
| 3.2729     | 3.4132          | 18    |
| 3.2775     | 3.4114          | 19    |
| 3.2561     | 3.3869          | 20    |
| 3.3089     | 3.4583          | 21    |
| 3.2839     | 3.4010          | 22    |
| 3.2863     | 3.4335          | 23    |
| 3.2347     | 3.4040          | 24    |
| 3.2691     | 3.3805          | 25    |
| 3.2779     | 3.4005          | 26    |
| 3.3175     | 3.3627          | 27    |
| 3.2853     | 3.3995          | 28    |
| 3.2787     | 3.3904          | 29    |
| 3.2739     | 3.4169          | 30    |
| 3.2976     | 3.3728          | 31    |
| 3.2474     | 3.4051          | 32    |
| 3.3152     | 3.3760          | 33    |
| 3.2939     | 3.4185          | 34    |
| 3.2955     | 3.3978          | 35    |
| 3.2823     | 3.3749          | 36    |
| 3.3171     | 3.4078          | 37    |
| 3.2513     | 3.4022          | 38    |
| 3.2573     | 3.4101          | 39    |


### Framework versions

- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0