# predict-perception-bertino-cause-object
This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0766
- R2: 0.8216
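The R2 value reported above is the coefficient of determination, which compares the model's squared error against that of a constant mean predictor. A minimal sketch of how it is computed, using made-up values rather than this model's actual predictions:

```python
# Coefficient of determination (R2): 1 minus the ratio of residual
# sum of squares to total sum of squares around the mean of y_true.
def r2_score(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical targets and predictions, for illustration only.
y_true = [0.2, 0.5, 0.9, 0.4]
y_pred = [0.25, 0.45, 0.85, 0.5]
print(round(r2_score(y_true, y_pred), 4))  # → 0.9327
```

An R2 of 1.0 would mean perfect predictions; 0.0 means no better than always predicting the mean.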
## Model description
More information needed
## Intended uses & limitations
More information needed
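Since BERTino is a distilled Italian BERT (DistilBERT architecture) and this fine-tune reports a regression metric (R2), the model presumably uses a single-output regression head. An offline sketch of that architecture, with randomly initialized weights for illustration; in practice you would load the fine-tuned checkpoint via `from_pretrained` with the full Hub repository id:

```python
# Offline architecture sketch: a DistilBERT sequence-classification model
# configured for single-label regression (num_labels=1), as this fine-tune
# presumably uses. Weights here are random; load the real checkpoint with
# from_pretrained("<org>/predict-perception-bertino-cause-object") in practice.
import torch
from transformers import DistilBertConfig, DistilBertForSequenceClassification

config = DistilBertConfig(num_labels=1, problem_type="regression")
model = DistilBertForSequenceClassification(config)
model.eval()

input_ids = torch.tensor([[101, 2023, 2003, 102]])  # toy token ids
with torch.no_grad():
    score = model(input_ids=input_ids).logits
print(score.shape)  # one scalar perception score per input sentence
```

The regression head outputs a single logit per sentence, which is trained directly against the continuous perception target with an MSE-style loss.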
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 47
### Training results
| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:--:|
0.6807 | 1.0 | 14 | 0.4011 | 0.0652 |
0.3529 | 2.0 | 28 | 0.2304 | 0.4631 |
0.1539 | 3.0 | 42 | 0.0596 | 0.8611 |
0.0853 | 4.0 | 56 | 0.1600 | 0.6272 |
0.066 | 5.0 | 70 | 0.1596 | 0.6280 |
0.0563 | 6.0 | 84 | 0.1146 | 0.7330 |
0.0777 | 7.0 | 98 | 0.1010 | 0.7646 |
0.0299 | 8.0 | 112 | 0.0897 | 0.7910 |
0.0311 | 9.0 | 126 | 0.0832 | 0.8061 |
0.0274 | 10.0 | 140 | 0.0988 | 0.7697 |
0.0262 | 11.0 | 154 | 0.1048 | 0.7557 |
0.0204 | 12.0 | 168 | 0.0615 | 0.8566 |
0.0254 | 13.0 | 182 | 0.0742 | 0.8270 |
0.0251 | 14.0 | 196 | 0.0923 | 0.7850 |
0.0149 | 15.0 | 210 | 0.0663 | 0.8456 |
0.0141 | 16.0 | 224 | 0.0755 | 0.8241 |
0.0112 | 17.0 | 238 | 0.0905 | 0.7891 |
0.0108 | 18.0 | 252 | 0.0834 | 0.8057 |
0.0096 | 19.0 | 266 | 0.0823 | 0.8082 |
0.0073 | 20.0 | 280 | 0.0825 | 0.8078 |
0.0092 | 21.0 | 294 | 0.0869 | 0.7974 |
0.0075 | 22.0 | 308 | 0.0744 | 0.8266 |
0.0075 | 23.0 | 322 | 0.0825 | 0.8078 |
0.0062 | 24.0 | 336 | 0.0797 | 0.8144 |
0.0065 | 25.0 | 350 | 0.0793 | 0.8152 |
0.007 | 26.0 | 364 | 0.0840 | 0.8043 |
0.0067 | 27.0 | 378 | 0.0964 | 0.7753 |
0.0064 | 28.0 | 392 | 0.0869 | 0.7976 |
0.0063 | 29.0 | 406 | 0.0766 | 0.8215 |
0.0057 | 30.0 | 420 | 0.0764 | 0.8219 |
0.0057 | 31.0 | 434 | 0.0796 | 0.8145 |
0.0054 | 32.0 | 448 | 0.0853 | 0.8012 |
0.0044 | 33.0 | 462 | 0.0750 | 0.8253 |
0.0072 | 34.0 | 476 | 0.0782 | 0.8179 |
0.006 | 35.0 | 490 | 0.0867 | 0.7979 |
0.0054 | 36.0 | 504 | 0.0819 | 0.8092 |
0.0047 | 37.0 | 518 | 0.0839 | 0.8045 |
0.0043 | 38.0 | 532 | 0.0764 | 0.8221 |
0.0039 | 39.0 | 546 | 0.0728 | 0.8303 |
0.0041 | 40.0 | 560 | 0.0755 | 0.8241 |
0.0038 | 41.0 | 574 | 0.0729 | 0.8301 |
0.0034 | 42.0 | 588 | 0.0781 | 0.8180 |
0.0038 | 43.0 | 602 | 0.0762 | 0.8224 |
0.0032 | 44.0 | 616 | 0.0777 | 0.8189 |
0.0035 | 45.0 | 630 | 0.0776 | 0.8191 |
0.0037 | 46.0 | 644 | 0.0765 | 0.8217 |
0.0036 | 47.0 | 658 | 0.0766 | 0.8216 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0