---
license: mit
base_model: cahya/bert-base-indonesian-522M
tags:
- generated_from_keras_callback
model-index:
- name: racheilla/bert-base-indonesian-522M-finetuned-pemilu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# racheilla/bert-base-indonesian-522M-finetuned-pemilu
This model is a fine-tuned version of [cahya/bert-base-indonesian-522M](https://huggingface.co/cahya/bert-base-indonesian-522M) on an unknown dataset.
It achieves the following results at the end of training (epoch 37):
- Train Loss: 3.3171
- Validation Loss: 3.4078
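
This checkpoint can be loaded like any other TensorFlow BERT checkpoint. The sketch below assumes it keeps the base model's masked-language-modeling head (the loss-only Keras metrics are typical of an MLM fine-tune, but the card does not state the task), and the fill-mask sentence is purely illustrative:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForMaskedLM

# Assumption: the checkpoint exposes an MLM head; swap the Auto class if it does not.
model_id = "racheilla/bert-base-indonesian-522M-finetuned-pemilu"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

# Illustrative fill-in-the-blank: "Pemilu diadakan setiap lima [MASK]."
# ("Elections are held every five ___.")
text = f"Pemilu diadakan setiap lima {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="tf")
logits = model(**inputs).logits

# Pick the highest-scoring token for the masked position.
mask_index = int(tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0])
predicted_id = int(tf.argmax(logits[0, mask_index]))
print(tokenizer.decode([predicted_id]))
```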
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay
  - learning rate schedule: WarmUp wrapping PolynomialDecay (initial_learning_rate: 2e-05, warmup_steps: 1000, decay_steps: -950, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - decay: 0.0
  - beta_1: 0.9
  - beta_2: 0.999
  - epsilon: 1e-08
  - amsgrad: False
  - weight_decay_rate: 0.01
- training_precision: mixed_float16
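
A roughly equivalent optimizer and precision policy can be rebuilt with the TensorFlow utilities in `transformers`. This is a sketch, not the original training script: `num_train_steps=50` is inferred from the logged `decay_steps` of -950 together with the 1000 warmup steps (in `create_optimizer`, decay steps = train steps minus warmup steps) and may not match the real run.

```python
import tensorflow as tf
from transformers import create_optimizer

# Matches training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=50,    # inferred from decay_steps=-950 and warmup_steps=1000 (assumption)
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    power=1.0,
)
# The optimizer can then be passed to model.compile(optimizer=optimizer) in a standard Keras loop.
```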
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2847 | 3.4266 | 0 |
| 3.3000 | 3.4116 | 1 |
| 3.2702 | 3.3975 | 2 |
| 3.2675 | 3.4689 | 3 |
| 3.2982 | 3.3540 | 4 |
| 3.3109 | 3.4127 | 5 |
| 3.2698 | 3.4126 | 6 |
| 3.2852 | 3.4165 | 7 |
| 3.2977 | 3.3816 | 8 |
| 3.2749 | 3.3923 | 9 |
| 3.2777 | 3.3841 | 10 |
| 3.2555 | 3.4534 | 11 |
| 3.2940 | 3.4194 | 12 |
| 3.2860 | 3.3810 | 13 |
| 3.2585 | 3.3328 | 14 |
| 3.2979 | 3.4310 | 15 |
| 3.2844 | 3.4374 | 16 |
| 3.2961 | 3.3630 | 17 |
| 3.2729 | 3.4132 | 18 |
| 3.2775 | 3.4114 | 19 |
| 3.2561 | 3.3869 | 20 |
| 3.3089 | 3.4583 | 21 |
| 3.2839 | 3.4010 | 22 |
| 3.2863 | 3.4335 | 23 |
| 3.2347 | 3.4040 | 24 |
| 3.2691 | 3.3805 | 25 |
| 3.2779 | 3.4005 | 26 |
| 3.3175 | 3.3627 | 27 |
| 3.2853 | 3.3995 | 28 |
| 3.2787 | 3.3904 | 29 |
| 3.2739 | 3.4169 | 30 |
| 3.2976 | 3.3728 | 31 |
| 3.2474 | 3.4051 | 32 |
| 3.3152 | 3.3760 | 33 |
| 3.2939 | 3.4185 | 34 |
| 3.2955 | 3.3978 | 35 |
| 3.2823 | 3.3749 | 36 |
| 3.3171 | 3.4078 | 37 |
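
Across all 38 logged epochs the losses stay in a narrow band (train loss roughly 3.23-3.32, validation loss roughly 3.33-3.47), so later checkpoints improve little over the early ones.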
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0