---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_base_sgd_0001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.13333333333333333
---

hushem_1x_deit_base_sgd_0001_fold1

This model is a fine-tuned version of facebook/deit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.5214
  • Accuracy: 0.1333
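
For quick reference, a minimal inference sketch is shown below. It assumes the model is hosted on the Hugging Face Hub under the repository id hkivancoral/hushem_1x_deit_base_sgd_0001_fold1 (inferred from the model name above) and that example.jpg is a local test image; adjust both to your setup.

```python
# Hedged usage sketch: load the fine-tuned DeiT classifier and run it on one image.
from transformers import pipeline

# Repository id inferred from the model name on this card; it may differ in practice.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_base_sgd_0001_fold1",
)

# "example.jpg" is a placeholder path; any image file path or PIL image works here.
predictions = classifier("example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```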

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reproducing them is shown after the list):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
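
As a non-authoritative illustration, these settings map onto a transformers TrainingArguments object roughly as follows; the output_dir value and the per-epoch evaluation strategy are assumptions not stated on this card (the results table simply reports one validation pass per epoch).

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir and evaluation_strategy are assumptions, not values from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_1x_deit_base_sgd_0001_fold1",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    adam_beta1=0.9,       # Adam betas and epsilon as listed (Trainer defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumed: the table reports one evaluation per epoch
)
```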

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.5231          | 0.1333   |
| 1.415         | 2.0   | 12   | 1.5230          | 0.1333   |
| 1.415         | 3.0   | 18   | 1.5229          | 0.1333   |
| 1.4163        | 4.0   | 24   | 1.5228          | 0.1333   |
| 1.412         | 5.0   | 30   | 1.5228          | 0.1333   |
| 1.412         | 6.0   | 36   | 1.5227          | 0.1333   |
| 1.4139        | 7.0   | 42   | 1.5226          | 0.1333   |
| 1.4139        | 8.0   | 48   | 1.5225          | 0.1333   |
| 1.425         | 9.0   | 54   | 1.5225          | 0.1333   |
| 1.4037        | 10.0  | 60   | 1.5224          | 0.1333   |
| 1.4037        | 11.0  | 66   | 1.5223          | 0.1333   |
| 1.4157        | 12.0  | 72   | 1.5223          | 0.1333   |
| 1.4157        | 13.0  | 78   | 1.5222          | 0.1333   |
| 1.4043        | 14.0  | 84   | 1.5221          | 0.1333   |
| 1.4083        | 15.0  | 90   | 1.5221          | 0.1333   |
| 1.4083        | 16.0  | 96   | 1.5220          | 0.1333   |
| 1.4137        | 17.0  | 102  | 1.5220          | 0.1333   |
| 1.4137        | 18.0  | 108  | 1.5219          | 0.1333   |
| 1.4171        | 19.0  | 114  | 1.5219          | 0.1333   |
| 1.4171        | 20.0  | 120  | 1.5218          | 0.1333   |
| 1.4171        | 21.0  | 126  | 1.5218          | 0.1333   |
| 1.415         | 22.0  | 132  | 1.5218          | 0.1333   |
| 1.415         | 23.0  | 138  | 1.5217          | 0.1333   |
| 1.4319        | 24.0  | 144  | 1.5217          | 0.1333   |
| 1.416         | 25.0  | 150  | 1.5217          | 0.1333   |
| 1.416         | 26.0  | 156  | 1.5216          | 0.1333   |
| 1.4082        | 27.0  | 162  | 1.5216          | 0.1333   |
| 1.4082        | 28.0  | 168  | 1.5216          | 0.1333   |
| 1.413         | 29.0  | 174  | 1.5216          | 0.1333   |
| 1.4097        | 30.0  | 180  | 1.5215          | 0.1333   |
| 1.4097        | 31.0  | 186  | 1.5215          | 0.1333   |
| 1.4098        | 32.0  | 192  | 1.5215          | 0.1333   |
| 1.4098        | 33.0  | 198  | 1.5215          | 0.1333   |
| 1.4176        | 34.0  | 204  | 1.5215          | 0.1333   |
| 1.4032        | 35.0  | 210  | 1.5214          | 0.1333   |
| 1.4032        | 36.0  | 216  | 1.5214          | 0.1333   |
| 1.4087        | 37.0  | 222  | 1.5214          | 0.1333   |
| 1.4087        | 38.0  | 228  | 1.5214          | 0.1333   |
| 1.4343        | 39.0  | 234  | 1.5214          | 0.1333   |
| 1.4129        | 40.0  | 240  | 1.5214          | 0.1333   |
| 1.4129        | 41.0  | 246  | 1.5214          | 0.1333   |
| 1.4161        | 42.0  | 252  | 1.5214          | 0.1333   |
| 1.4161        | 43.0  | 258  | 1.5214          | 0.1333   |
| 1.3976        | 44.0  | 264  | 1.5214          | 0.1333   |
| 1.4177        | 45.0  | 270  | 1.5214          | 0.1333   |
| 1.4177        | 46.0  | 276  | 1.5214          | 0.1333   |
| 1.4164        | 47.0  | 282  | 1.5214          | 0.1333   |
| 1.4164        | 48.0  | 288  | 1.5214          | 0.1333   |
| 1.4144        | 49.0  | 294  | 1.5214          | 0.1333   |
| 1.4018        | 50.0  | 300  | 1.5214          | 0.1333   |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.7
  • Tokenizers 0.15.0