
tsf-gs-rot-flip-wtoken-DRPT-r128-f150-8.8-h768-i3072-p32-b8-e60

This model is a fine-tuned version of facebook/timesformer-base-finetuned-k400 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7740
  • Accuracy: 0.8610
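
The snippet below is a minimal, hedged inference sketch: it assumes the checkpoint is public on the Hub under the repository id shown, that the standard Transformers TimesFormer classes apply, and that the dummy clip's frame count and resolution (purely illustrative here) are replaced with the values this checkpoint was actually trained with.

```python
# Hedged inference sketch. Assumptions: the checkpoint is public on the Hub,
# and the dummy clip's frame count/resolution (illustrative) are swapped for
# the values in this checkpoint's own config.
import numpy as np
import torch
from transformers import AutoImageProcessor, TimesformerForVideoClassification

ckpt = "Temo27Anas/tsf-gs-rot-flip-wtoken-DRPT-r128-f150-8.8-h768-i3072-p32-b8-e60"
# The fine-tuned repo may not ship a preprocessor config; fall back to the base model's.
processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-k400")
model = TimesformerForVideoClassification.from_pretrained(ckpt)
model.eval()

# Dummy clip: a list of RGB frames (H, W, C). Replace with real video frames.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(8)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```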

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 6480
  • mixed_precision_training: Native AMP
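
For reproducibility, the listed values map onto the Transformers TrainingArguments API roughly as sketched below. The output_dir name is hypothetical, the model/dataset/Trainer wiring is omitted, and the optimizer comment reflects an assumption: the card says Adam, while the HF Trainer's stock optimizer is AdamW with the listed betas and epsilon.

```python
# Hedged TrainingArguments sketch mirroring the listed hyperparameters.
# output_dir is hypothetical; model, dataset, and Trainer wiring are omitted.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tsf-finetune",        # hypothetical name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                   # card lists Adam with these betas/epsilon
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=6480,                   # "training_steps" in the card
    fp16=True,                        # "Native AMP" mixed precision
)
```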

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 1.1114        | 0.0168  | 109  | 1.0984          | 0.3797   |
| 1.1049        | 1.0168  | 218  | 1.1141          | 0.3262   |
| 1.1056        | 2.0168  | 327  | 1.1710          | 0.3262   |
| 1.1154        | 3.0168  | 436  | 1.1256          | 0.3369   |
| 1.1056        | 4.0168  | 545  | 1.0962          | 0.3529   |
| 1.1569        | 5.0168  | 654  | 1.1136          | 0.3262   |
| 1.036         | 6.0168  | 763  | 1.0213          | 0.4652   |
| 1.0506        | 7.0168  | 872  | 1.0488          | 0.4278   |
| 1.042         | 8.0168  | 981  | 0.9366          | 0.5561   |
| 0.8999        | 9.0168  | 1090 | 0.8098          | 0.6310   |
| 0.9432        | 10.0168 | 1199 | 0.9513          | 0.5936   |
| 0.7898        | 11.0168 | 1308 | 0.5836          | 0.8021   |
| 0.7952        | 12.0168 | 1417 | 0.5680          | 0.7647   |
| 0.6641        | 13.0168 | 1526 | 0.6147          | 0.7861   |
| 0.6901        | 14.0168 | 1635 | 0.5688          | 0.7754   |
| 0.4637        | 15.0168 | 1744 | 0.5834          | 0.7914   |
| 0.5898        | 16.0168 | 1853 | 0.6636          | 0.7326   |
| 0.7036        | 17.0168 | 1962 | 0.7142          | 0.7433   |
| 0.3946        | 18.0168 | 2071 | 0.4866          | 0.8342   |
| 0.5379        | 19.0168 | 2180 | 0.6641          | 0.7701   |
| 0.5869        | 20.0168 | 2289 | 0.4817          | 0.8289   |
| 0.4564        | 21.0168 | 2398 | 0.4909          | 0.8396   |
| 0.419         | 22.0168 | 2507 | 0.5006          | 0.8235   |
| 0.4989        | 23.0168 | 2616 | 0.5648          | 0.8182   |
| 0.2701        | 24.0168 | 2725 | 0.5963          | 0.8342   |
| 0.5191        | 25.0168 | 2834 | 0.5766          | 0.7914   |
| 0.5088        | 26.0168 | 2943 | 0.4679          | 0.8610   |
| 0.3828        | 27.0168 | 3052 | 0.5231          | 0.8503   |
| 0.4228        | 28.0168 | 3161 | 0.6142          | 0.8235   |
| 0.5544        | 29.0168 | 3270 | 0.6508          | 0.8289   |
| 0.3595        | 30.0168 | 3379 | 0.6572          | 0.7914   |
| 0.3117        | 31.0168 | 3488 | 0.5587          | 0.8342   |
| 0.3324        | 32.0168 | 3597 | 0.5021          | 0.8610   |
| 0.3282        | 33.0168 | 3706 | 0.7642          | 0.8235   |
| 0.427         | 34.0168 | 3815 | 0.5739          | 0.8663   |
| 0.152         | 35.0168 | 3924 | 0.6957          | 0.8610   |
| 0.426         | 36.0168 | 4033 | 0.6705          | 0.8342   |
| 0.2803        | 37.0168 | 4142 | 0.5854          | 0.8449   |
| 0.3198        | 38.0168 | 4251 | 0.5280          | 0.8449   |
| 0.4348        | 39.0168 | 4360 | 0.7755          | 0.8128   |
| 0.1915        | 40.0168 | 4469 | 0.6813          | 0.8503   |
| 0.0793        | 41.0168 | 4578 | 0.7260          | 0.8503   |
| 0.3902        | 42.0168 | 4687 | 0.6581          | 0.8663   |
| 0.3552        | 43.0168 | 4796 | 0.5732          | 0.8610   |
| 0.3091        | 44.0168 | 4905 | 0.7510          | 0.8396   |
| 0.1103        | 45.0168 | 5014 | 0.7604          | 0.8503   |
| 0.3362        | 46.0168 | 5123 | 0.7156          | 0.8556   |
| 0.1935        | 47.0168 | 5232 | 0.6882          | 0.8503   |
| 0.0889        | 48.0168 | 5341 | 0.7639          | 0.8663   |
| 0.2156        | 49.0168 | 5450 | 0.8250          | 0.8610   |
| 0.0949        | 50.0168 | 5559 | 0.8256          | 0.8556   |
| 0.1735        | 51.0168 | 5668 | 0.6839          | 0.8770   |
| 0.1612        | 52.0168 | 5777 | 0.9082          | 0.8663   |
| 0.0556        | 53.0168 | 5886 | 0.7659          | 0.8770   |
| 0.0997        | 54.0168 | 5995 | 0.8025          | 0.8824   |
| 0.1648        | 55.0168 | 6104 | 0.8449          | 0.8770   |
| 0.1443        | 56.0168 | 6213 | 0.7801          | 0.8770   |
| 0.0405        | 57.0168 | 6322 | 0.8647          | 0.8663   |
| 0.3323        | 58.0168 | 6431 | 0.8411          | 0.8503   |
| 0.1137        | 59.0076 | 6480 | 0.7740          | 0.8610   |
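
The Accuracy column is presumably top-1 classification accuracy on the evaluation set. The sketch below shows one conventional way to produce such a column with the HF Trainer; the metric code actually used for this run is not documented in the card.

```python
# Hedged sketch of a compute_metrics hook for the Accuracy column above;
# the exact metric implementation used in this run is not documented here.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)   # top-1 prediction per clip
    return accuracy.compute(predictions=preds, references=labels)

# Passed to the Trainer as: Trainer(..., compute_metrics=compute_metrics)
```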

Framework versions

  • Transformers 4.41.2
  • PyTorch 1.13.0+cu117
  • Datasets 2.20.0
  • Tokenizers 0.19.1