---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: ashhadahsan/amazon-theme-bert-base-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ashhadahsan/amazon-theme-bert-base-finetuned
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results at the final training epoch (epoch 29):
- Train Loss: 0.0127
- Train Accuracy: 0.9915
- Validation Loss: 0.8057
- Validation Accuracy: 0.8722
## Model description
More information needed
## Intended uses & limitations
More information needed
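
Below is a minimal usage sketch. It assumes the checkpoint is a TensorFlow sequence-classification head on top of `bert-base-uncased` (as the training metrics suggest); since the label mapping is not documented here, only the predicted class index is printed.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "ashhadahsan/amazon-theme-bert-base-finetuned"

# Load the tokenizer and the fine-tuned TensorFlow checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize a sample review and run a forward pass.
text = "The battery lasts much longer than my previous phone."
inputs = tokenizer(text, return_tensors="tf", truncation=True, padding=True)
logits = model(**inputs).logits

# The label names are not documented, so report only the raw class index.
predicted_class = int(tf.argmax(logits, axis=-1)[0])
print(predicted_class)
```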
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
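
For reference, the optimizer settings above correspond to the following Keras configuration. This is a sketch of how they could be recreated, not the original training script:

```python
import tensorflow as tf

# Recreate the Adam configuration listed above (TensorFlow 2.12 Keras optimizer).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
    clipnorm=1.0,  # per-variable gradient-norm clipping, as in the config above
)
```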
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 1.3910 | 0.5974 | 0.8022 | 0.8008 | 0 |
| 0.2739 | 0.9554 | 0.6211 | 0.8609 | 1 |
| 0.0782 | 0.9885 | 0.5895 | 0.8609 | 2 |
| 0.0418 | 0.9913 | 0.5456 | 0.8797 | 3 |
| 0.0318 | 0.9908 | 0.5729 | 0.8797 | 4 |
| 0.0251 | 0.9906 | 0.5747 | 0.8797 | 5 |
| 0.0211 | 0.9913 | 0.5994 | 0.8797 | 6 |
| 0.0195 | 0.9906 | 0.6241 | 0.8797 | 7 |
| 0.0184 | 0.9911 | 0.6244 | 0.8797 | 8 |
| 0.0170 | 0.9904 | 0.6235 | 0.8797 | 9 |
| 0.0159 | 0.9913 | 0.6619 | 0.8797 | 10 |
| 0.0164 | 0.9913 | 0.6501 | 0.8797 | 11 |
| 0.0165 | 0.9911 | 0.6452 | 0.8835 | 12 |
| 0.0155 | 0.9908 | 0.6727 | 0.8872 | 13 |
| 0.0149 | 0.9904 | 0.6798 | 0.8835 | 14 |
| 0.0144 | 0.9906 | 0.6905 | 0.8797 | 15 |
| 0.0142 | 0.9923 | 0.7089 | 0.8797 | 16 |
| 0.0140 | 0.9923 | 0.7335 | 0.8722 | 17 |
| 0.0138 | 0.9915 | 0.7297 | 0.8722 | 18 |
| 0.0143 | 0.9908 | 0.7030 | 0.8759 | 19 |
| 0.0140 | 0.9906 | 0.7420 | 0.8759 | 20 |
| 0.0134 | 0.9915 | 0.7419 | 0.8759 | 21 |
| 0.0134 | 0.9913 | 0.7448 | 0.8835 | 22 |
| 0.0132 | 0.9915 | 0.7791 | 0.8722 | 23 |
| 0.0131 | 0.9923 | 0.7567 | 0.8797 | 24 |
| 0.0134 | 0.9915 | 0.7809 | 0.8797 | 25 |
| 0.0125 | 0.9925 | 0.7941 | 0.8797 | 26 |
| 0.0126 | 0.9923 | 0.7943 | 0.8759 | 27 |
| 0.0126 | 0.9915 | 0.8071 | 0.8797 | 28 |
| 0.0127 | 0.9915 | 0.8057 | 0.8722 | 29 |
### Framework versions
- Transformers 4.31.0
- TensorFlow 2.12.0
- Tokenizers 0.13.3