swinv2-tiny-patch4-window8-256-dmae-humeda-DA-V2

This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2824
  • Accuracy: 0.6346

Model description

More information needed

Intended uses & limitations

More information needed
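
Although usage details are not documented, this is an image-classification checkpoint, so the standard Transformers loading pattern applies. Below is a minimal inference sketch; the image path is a placeholder, and the label names come from whatever (undocumented) dataset the model was fine-tuned on.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "Augusto777/swinv2-tiny-patch4-window8-256-dmae-humeda-DA-V2"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder: any RGB input image
inputs = processor(images=image, return_tensors="pt")  # resizes to 256x256 and normalizes

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # label set depends on the undocumented dataset
```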

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 42
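
For reference, here is a hedged TrainingArguments sketch that mirrors the values above. The dataset objects and label count are hypothetical, since the training data is not documented.

```python
from transformers import (
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForImageClassification.from_pretrained(
    "microsoft/swinv2-tiny-patch4-window8-256",
    num_labels=num_labels,            # hypothetical: depends on the undocumented dataset
    ignore_mismatched_sizes=True,     # replaces the pretrained classification head
)

args = TrainingArguments(
    output_dir="swinv2-tiny-patch4-window8-256-dmae-humeda-DA-V2",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,    # 32 * 4 = total train batch size of 128
    seed=42,
    optim="adamw_torch",              # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,      # hypothetical: training data is not documented
    eval_dataset=eval_dataset,        # hypothetical
)
trainer.train()
```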

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.8696  | 5    | 1.5656          | 0.3462   |
| No log        | 1.8261  | 10   | 1.5187          | 0.3654   |
| 1.5752        | 2.7826  | 15   | 1.4276          | 0.4615   |
| 1.5752        | 3.9130  | 21   | 1.2763          | 0.4615   |
| 1.3428        | 4.8696  | 26   | 1.1021          | 0.5385   |
| 1.3428        | 5.8261  | 31   | 1.0576          | 0.5385   |
| 1.0139        | 6.7826  | 36   | 1.0284          | 0.5769   |
| 1.0139        | 7.9130  | 42   | 1.0838          | 0.5577   |
| 1.0139        | 8.8696  | 47   | 1.0973          | 0.5192   |
| 0.7923        | 9.8261  | 52   | 1.0880          | 0.5962   |
| 0.7923        | 10.7826 | 57   | 1.1609          | 0.5577   |
| 0.7424        | 11.9130 | 63   | 1.0270          | 0.6731   |
| 0.7424        | 12.8696 | 68   | 1.0543          | 0.6154   |
| 0.6486        | 13.8261 | 73   | 0.9941          | 0.6154   |
| 0.6486        | 14.7826 | 78   | 1.2237          | 0.5577   |
| 0.595         | 15.9130 | 84   | 1.0447          | 0.6154   |
| 0.595         | 16.8696 | 89   | 1.2137          | 0.6538   |
| 0.595         | 17.8261 | 94   | 1.0769          | 0.6538   |
| 0.5128        | 18.7826 | 99   | 1.1559          | 0.6346   |
| 0.5128        | 19.9130 | 105  | 1.1879          | 0.6346   |
| 0.4968        | 20.8696 | 110  | 1.1066          | 0.6154   |
| 0.4968        | 21.8261 | 115  | 1.0849          | 0.6154   |
| 0.4385        | 22.7826 | 120  | 1.1078          | 0.6731   |
| 0.4385        | 23.9130 | 126  | 1.2649          | 0.6538   |
| 0.4385        | 24.8696 | 131  | 1.1754          | 0.6346   |
| 0.437         | 25.8261 | 136  | 1.1540          | 0.6346   |
| 0.437         | 26.7826 | 141  | 1.3327          | 0.6154   |
| 0.3852        | 27.9130 | 147  | 1.2256          | 0.6538   |
| 0.3852        | 28.8696 | 152  | 1.2546          | 0.6154   |
| 0.3813        | 29.8261 | 157  | 1.3192          | 0.6154   |
| 0.3813        | 30.7826 | 162  | 1.2768          | 0.6154   |
| 0.3614        | 31.9130 | 168  | 1.2230          | 0.6154   |
| 0.3614        | 32.8696 | 173  | 1.2120          | 0.6154   |
| 0.3614        | 33.8261 | 178  | 1.3150          | 0.6538   |
| 0.3339        | 34.7826 | 183  | 1.3258          | 0.6538   |
| 0.3339        | 35.9130 | 189  | 1.2851          | 0.6154   |
| 0.3294        | 36.8696 | 194  | 1.2712          | 0.6154   |
| 0.3294        | 37.8261 | 199  | 1.2630          | 0.6154   |
| 0.3269        | 38.7826 | 204  | 1.2713          | 0.6346   |
| 0.3269        | 39.9130 | 210  | 1.2824          | 0.6346   |

Framework versions

  • Transformers 4.46.2
  • Pytorch 2.5.1+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.3
