
vit-base-oxford-iiit-pets

This model is a fine-tuned version of google/vit-base-patch16-224 on the pcuenq/oxford-pets dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1798
  • Accuracy: 0.9310

Model description

This model is a Vision Transformer (ViT-Base, 16×16 patches, 224×224 input resolution) adapted for pet breed classification. Starting from the google/vit-base-patch16-224 checkpoint, which was pretrained on ImageNet-21k and fine-tuned on ImageNet-1k, the classification head is replaced and the model is fine-tuned to predict one of the 37 cat and dog breeds in the Oxford-IIIT Pet dataset.

Intended uses & limitations

The model is intended for classifying photographs of cats and dogs into the 37 breeds covered by the Oxford-IIIT Pet dataset. It should not be expected to generalize to breeds or animal species outside those classes, to non-photographic imagery, or to other vision tasks such as detection or segmentation, and it has not been evaluated for robustness or bias beyond the accuracy reported above.
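
A minimal inference sketch with the transformers image-classification pipeline is shown below; the Hub repository id and the image path are assumptions for illustration:

```python
# Inference sketch (assumes: pip install transformers torch pillow).
# The repository id and the image file are illustrative assumptions.
from PIL import Image
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Ryukijano/vit-base-oxford-iiit-pets",  # assumed Hub repo id
)

image = Image.open("my_pet.jpg")  # hypothetical local image
for prediction in classifier(image, top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```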

Training and evaluation data

The model was fine-tuned on pcuenq/oxford-pets, a Hugging Face Datasets packaging of the Oxford-IIIT Pet dataset (roughly 7,400 photographs of cats and dogs labeled with 37 breeds). The evaluation results above were computed on a held-out portion of the same dataset.
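
A sketch of how the data could be loaded and split; the single "train" split, the 80/20 ratio, and the column names are assumptions rather than details taken from the original training script:

```python
# Data loading sketch. The split ratio and the column names ("image", "label")
# are assumptions; the actual preprocessing script is not part of this card.
from datasets import load_dataset

dataset = load_dataset("pcuenq/oxford-pets")  # assumes a single "train" split
splits = dataset["train"].train_test_split(test_size=0.2, seed=42)

print(splits)                    # DatasetDict with "train" and "test" splits
print(splits["train"].features)  # expected to include image and breed-label columns
```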

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 512
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
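
A Trainer configuration consistent with these hyperparameters might look like the sketch below. The dataset preparation, label handling, collate function, and per-epoch evaluation are assumptions, not the exact original script:

```python
# Training sketch mirroring the listed hyperparameters. Data columns, label
# mapping, and the evaluation strategy are assumptions for illustration.
import numpy as np
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

checkpoint = "google/vit-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(checkpoint)

# Assumed single "train" split and an 80/20 train/eval split.
dataset = load_dataset("pcuenq/oxford-pets")["train"].train_test_split(
    test_size=0.2, seed=42
)
labels = sorted(set(dataset["train"]["label"]))  # assumed "label" column
id2label = {i: l for i, l in enumerate(labels)}
label2id = {l: i for i, l in enumerate(labels)}

def transform(batch):
    # Turn PIL images into pixel values and map breed labels to class ids.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = [label2id[l] for l in batch["label"]]
    return inputs

dataset = dataset.with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["labels"] for e in examples]),
    }

def compute_metrics(eval_pred):
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return {"accuracy": (predictions == eval_pred.label_ids).mean()}

model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head
)

args = TrainingArguments(
    output_dir="vit-base-oxford-iiit-pets",
    learning_rate=3e-4,
    per_device_train_batch_size=512,  # listed train_batch_size; assumes one device
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",        # assumed; matches the per-epoch results below
    remove_unused_columns=False,  # keep the raw "image" column for the transform
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    data_collator=collate_fn,
    compute_metrics=compute_metrics,
)
trainer.train()
```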

Training results

Training Loss Epoch Step Validation Loss Accuracy
No log 1.0 12 2.6101 0.5223
No log 2.0 24 1.7190 0.8227
No log 3.0 36 1.0833 0.8890
No log 4.0 48 0.7011 0.9120
No log 5.0 60 0.5052 0.9242
No log 6.0 72 0.4097 0.9310
No log 7.0 84 0.3560 0.9350
No log 8.0 96 0.3237 0.9337
1.1364 9.0 108 0.3008 0.9378
1.1364 10.0 120 0.2833 0.9364
1.1364 11.0 132 0.2694 0.9391
1.1364 12.0 144 0.2586 0.9391
1.1364 13.0 156 0.2498 0.9418
1.1364 14.0 168 0.2423 0.9405
1.1364 15.0 180 0.2359 0.9405
1.1364 16.0 192 0.2303 0.9459
0.2326 17.0 204 0.2259 0.9405
0.2326 18.0 216 0.2222 0.9405
0.2326 19.0 228 0.2178 0.9432
0.2326 20.0 240 0.2146 0.9445
0.2326 21.0 252 0.2114 0.9432
0.2326 22.0 264 0.2087 0.9445
0.2326 23.0 276 0.2061 0.9432
0.2326 24.0 288 0.2040 0.9459
0.1651 25.0 300 0.2018 0.9459
0.1651 26.0 312 0.2000 0.9445
0.1651 27.0 324 0.1985 0.9459
0.1651 28.0 336 0.1968 0.9472
0.1651 29.0 348 0.1948 0.9459
0.1651 30.0 360 0.1939 0.9459
0.1651 31.0 372 0.1924 0.9459
0.1651 32.0 384 0.1915 0.9459
0.1651 33.0 396 0.1909 0.9459
0.134 34.0 408 0.1894 0.9472
0.134 35.0 420 0.1883 0.9459
0.134 36.0 432 0.1877 0.9472
0.134 37.0 444 0.1866 0.9486
0.134 38.0 456 0.1863 0.9472
0.134 39.0 468 0.1851 0.9486
0.134 40.0 480 0.1843 0.9472
0.134 41.0 492 0.1837 0.9472
0.1128 42.0 504 0.1831 0.9459
0.1128 43.0 516 0.1828 0.9472
0.1128 44.0 528 0.1822 0.9472
0.1128 45.0 540 0.1816 0.9472
0.1128 46.0 552 0.1808 0.9459
0.1128 47.0 564 0.1804 0.9459
0.1128 48.0 576 0.1802 0.9459
0.1128 49.0 588 0.1796 0.9459
0.0999 50.0 600 0.1793 0.9472
0.0999 51.0 612 0.1792 0.9486
0.0999 52.0 624 0.1787 0.9472
0.0999 53.0 636 0.1784 0.9472
0.0999 54.0 648 0.1780 0.9459
0.0999 55.0 660 0.1778 0.9445
0.0999 56.0 672 0.1772 0.9445
0.0999 57.0 684 0.1769 0.9472
0.0999 58.0 696 0.1768 0.9472
0.0894 59.0 708 0.1766 0.9472
0.0894 60.0 720 0.1763 0.9472
0.0894 61.0 732 0.1762 0.9486
0.0894 62.0 744 0.1760 0.9472
0.0894 63.0 756 0.1755 0.9459
0.0894 64.0 768 0.1752 0.9459
0.0894 65.0 780 0.1749 0.9459
0.0894 66.0 792 0.1749 0.9459
0.0828 67.0 804 0.1746 0.9472
0.0828 68.0 816 0.1745 0.9459
0.0828 69.0 828 0.1745 0.9459
0.0828 70.0 840 0.1744 0.9459
0.0828 71.0 852 0.1740 0.9459
0.0828 72.0 864 0.1741 0.9459
0.0828 73.0 876 0.1737 0.9459
0.0828 74.0 888 0.1739 0.9459
0.0778 75.0 900 0.1739 0.9459
0.0778 76.0 912 0.1737 0.9459
0.0778 77.0 924 0.1735 0.9459
0.0778 78.0 936 0.1733 0.9459
0.0778 79.0 948 0.1732 0.9459
0.0778 80.0 960 0.1732 0.9459
0.0778 81.0 972 0.1730 0.9459
0.0778 82.0 984 0.1730 0.9459
0.0778 83.0 996 0.1730 0.9459
0.0738 84.0 1008 0.1729 0.9459
0.0738 85.0 1020 0.1727 0.9459
0.0738 86.0 1032 0.1726 0.9459
0.0738 87.0 1044 0.1726 0.9459
0.0738 88.0 1056 0.1726 0.9459
0.0738 89.0 1068 0.1726 0.9459
0.0738 90.0 1080 0.1725 0.9459
0.0738 91.0 1092 0.1724 0.9459
0.0715 92.0 1104 0.1724 0.9459
0.0715 93.0 1116 0.1723 0.9459
0.0715 94.0 1128 0.1723 0.9459
0.0715 95.0 1140 0.1723 0.9459
0.0715 96.0 1152 0.1722 0.9459
0.0715 97.0 1164 0.1722 0.9459
0.0715 98.0 1176 0.1722 0.9459
0.0715 99.0 1188 0.1722 0.9459
0.0701 100.0 1200 0.1722 0.9459

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.4.1
  • Datasets 3.0.0
  • Tokenizers 0.19.1
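
A quick check that a local environment matches these versions (illustrative only):

```python
# Print installed versions to compare against the list above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```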
