
predict-perception-bertino-focus-assassin

This model is a fine-tuned version of indigo-ai/BERTino (an Italian DistilBERT model) on an unknown dataset. It achieves the following results on the evaluation set (R2 is defined below the list):

  • Loss: 0.3409
  • R2: 0.3205
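
The card does not define R2; assuming it is the usual coefficient of determination computed on the evaluation set, it corresponds to

$$
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}
$$

where $y_i$ are the reference perception scores, $\hat{y}_i$ the model's predictions, and $\bar{y}$ the mean reference score.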

Model description

More information needed

Intended uses & limitations

More information needed
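
In the absence of documented usage, the sketch below shows one plausible way to run inference. The hub namespace is a placeholder (not stated in the card), and the single-output regression head is an assumption inferred from the R2 metric reported above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder hub id: replace "<namespace>" with the account that actually hosts this model.
model_id = "<namespace>/predict-perception-bertino-focus-assassin"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)  # assumed single-output (regression) head

# BERTino is an Italian model, so the input should be Italian text.
text = "L'assassino ha agito da solo."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # predicted perception score
print(score)
```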

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding configuration follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 47
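
The training script is not included in the card. As a minimal sketch, the hyperparameters above could be expressed with the Transformers Trainer API as follows; `model`, `train_dataset`, `eval_dataset`, and `compute_metrics` are assumed to be defined elsewhere and are not part of the original card.

```python
from transformers import Trainer, TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="predict-perception-bertino-focus-assassin",  # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    num_train_epochs=47,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # the results table reports one validation row per epoch
)

trainer = Trainer(
    model=model,                      # BERTino with a regression head (assumed)
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,  # returns the R2 metric
)
trainer.train()
```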

Training results

| Training Loss | Epoch | Step | Validation Loss | R2     |
|---------------|-------|------|-----------------|--------|
| 0.5573        | 1.0   | 14   | 0.4856          | 0.0321 |
| 0.1739        | 2.0   | 28   | 0.4735          | 0.0562 |
| 0.0813        | 3.0   | 42   | 0.3416          | 0.3191 |
| 0.0764        | 4.0   | 56   | 0.3613          | 0.2799 |
| 0.0516        | 5.0   | 70   | 0.3264          | 0.3495 |
| 0.0459        | 6.0   | 84   | 0.4193          | 0.1643 |
| 0.0414        | 7.0   | 98   | 0.3502          | 0.3019 |
| 0.028         | 8.0   | 112  | 0.3361          | 0.3301 |
| 0.0281        | 9.0   | 126  | 0.3610          | 0.2804 |
| 0.027         | 10.0  | 140  | 0.3523          | 0.2978 |
| 0.0216        | 11.0  | 154  | 0.3440          | 0.3143 |
| 0.0181        | 12.0  | 168  | 0.3506          | 0.3012 |
| 0.013         | 13.0  | 182  | 0.3299          | 0.3424 |
| 0.0116        | 14.0  | 196  | 0.3611          | 0.2803 |
| 0.0118        | 15.0  | 210  | 0.3505          | 0.3013 |
| 0.0139        | 16.0  | 224  | 0.3529          | 0.2967 |
| 0.0099        | 17.0  | 238  | 0.3536          | 0.2952 |
| 0.0096        | 18.0  | 252  | 0.3542          | 0.2941 |
| 0.0107        | 19.0  | 266  | 0.3770          | 0.2486 |
| 0.0088        | 20.0  | 280  | 0.3467          | 0.3091 |
| 0.0065        | 21.0  | 294  | 0.3327          | 0.3369 |
| 0.0073        | 22.0  | 308  | 0.3479          | 0.3066 |
| 0.0062        | 23.0  | 322  | 0.3566          | 0.2893 |
| 0.0063        | 24.0  | 336  | 0.3503          | 0.3019 |
| 0.0057        | 25.0  | 350  | 0.3371          | 0.3282 |
| 0.0049        | 26.0  | 364  | 0.3334          | 0.3355 |
| 0.0045        | 27.0  | 378  | 0.3399          | 0.3225 |
| 0.0049        | 28.0  | 392  | 0.3379          | 0.3266 |
| 0.0049        | 29.0  | 406  | 0.3377          | 0.3268 |
| 0.0055        | 30.0  | 420  | 0.3357          | 0.3309 |
| 0.005         | 31.0  | 434  | 0.3394          | 0.3235 |
| 0.0046        | 32.0  | 448  | 0.3432          | 0.3159 |
| 0.0048        | 33.0  | 462  | 0.3427          | 0.3169 |
| 0.0041        | 34.0  | 476  | 0.3450          | 0.3123 |
| 0.0041        | 35.0  | 490  | 0.3436          | 0.3151 |
| 0.0051        | 36.0  | 504  | 0.3394          | 0.3234 |
| 0.0037        | 37.0  | 518  | 0.3370          | 0.3283 |
| 0.004         | 38.0  | 532  | 0.3370          | 0.3284 |
| 0.0033        | 39.0  | 546  | 0.3339          | 0.3344 |
| 0.0034        | 40.0  | 560  | 0.3335          | 0.3352 |
| 0.003         | 41.0  | 574  | 0.3373          | 0.3276 |
| 0.0035        | 42.0  | 588  | 0.3380          | 0.3264 |
| 0.0032        | 43.0  | 602  | 0.3382          | 0.3259 |
| 0.0034        | 44.0  | 616  | 0.3432          | 0.3158 |
| 0.003         | 45.0  | 630  | 0.3421          | 0.3181 |
| 0.0027        | 46.0  | 644  | 0.3410          | 0.3203 |
| 0.0037        | 47.0  | 658  | 0.3409          | 0.3205 |

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0