# google-siglip-base-patch16-224-batch32-lr0.005-standford-dogs
This model is a fine-tuned version of [google/siglip-base-patch16-224](https://huggingface.co/google/siglip-base-patch16-224) on the stanford-dogs dataset. It achieves the following results on the evaluation set (a brief inference sketch follows the metrics):
- Loss: 0.5447
- Accuracy: 0.8324
- F1: 0.8275
- Precision: 0.8340
- Recall: 0.8285
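
For quick verification, here is a minimal inference sketch using the `transformers` image-classification pipeline. The hub repo id follows this card's name; the image path `dog.jpg` is a placeholder, not from the card.

```python
from transformers import pipeline

# Minimal inference sketch (assumed usage, not from the card).
# The repo id matches this model card; "dog.jpg" is a placeholder path.
classifier = pipeline(
    "image-classification",
    model="amaye15/google-siglip-base-patch16-224-batch32-lr0.005-standford-dogs",
)

# Prints the top-5 predicted dog breeds with confidence scores.
print(classifier("dog.jpg", top_k=5))
```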
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
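
As referenced above, a minimal `TrainingArguments` sketch mirroring these values is shown below. The `output_dir` name is an assumption; the Adam betas and epsilon are written out explicitly even though they match the library defaults.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the listed hyperparameters.
# output_dir is an assumed name; everything else mirrors the list above.
training_args = TrainingArguments(
    output_dir="siglip-base-patch16-224-stanford-dogs",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = 128 effective batch size
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=1000,
)
```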
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
---|---|---|---|---|---|---|---|
4.8988 | 0.0777 | 10 | 4.4703 | 0.0632 | 0.0290 | 0.0456 | 0.0624 |
4.4323 | 0.1553 | 20 | 3.8317 | 0.1540 | 0.1033 | 0.1490 | 0.1435 |
3.8517 | 0.2330 | 30 | 2.9889 | 0.2787 | 0.2215 | 0.3131 | 0.2661 |
3.4059 | 0.3107 | 40 | 2.3481 | 0.3754 | 0.3339 | 0.4429 | 0.3702 |
2.8496 | 0.3883 | 50 | 2.3529 | 0.3649 | 0.3426 | 0.5046 | 0.3637 |
2.597 | 0.4660 | 60 | 1.6990 | 0.5350 | 0.5160 | 0.6056 | 0.5289 |
2.2791 | 0.5437 | 70 | 1.5456 | 0.5649 | 0.5345 | 0.6426 | 0.5591 |
2.056 | 0.6214 | 80 | 1.5037 | 0.5678 | 0.5557 | 0.6359 | 0.5658 |
1.9135 | 0.6990 | 90 | 1.5768 | 0.5413 | 0.5097 | 0.6302 | 0.5321 |
1.8408 | 0.7767 | 100 | 1.1497 | 0.6591 | 0.6394 | 0.6927 | 0.6535 |
1.7106 | 0.8544 | 110 | 1.2396 | 0.6365 | 0.6200 | 0.6801 | 0.6297 |
1.7172 | 0.9320 | 120 | 1.0894 | 0.6820 | 0.6715 | 0.7272 | 0.6766 |
1.6366 | 1.0097 | 130 | 1.0108 | 0.6963 | 0.6866 | 0.7387 | 0.6907 |
1.3805 | 1.0874 | 140 | 0.9943 | 0.6941 | 0.6838 | 0.7329 | 0.6878 |
1.4473 | 1.1650 | 150 | 0.9784 | 0.7034 | 0.6917 | 0.7437 | 0.6999 |
1.3215 | 1.2427 | 160 | 1.0036 | 0.6922 | 0.6767 | 0.7445 | 0.6862 |
1.3711 | 1.3204 | 170 | 0.9941 | 0.6859 | 0.6797 | 0.7414 | 0.6807 |
1.2312 | 1.3981 | 180 | 0.9691 | 0.6973 | 0.6904 | 0.7373 | 0.6970 |
1.3214 | 1.4757 | 190 | 0.9573 | 0.7106 | 0.6934 | 0.7435 | 0.7041 |
1.2569 | 1.5534 | 200 | 0.9337 | 0.7155 | 0.7062 | 0.7480 | 0.7147 |
1.2645 | 1.6311 | 210 | 0.8849 | 0.7298 | 0.7231 | 0.7586 | 0.7264 |
1.2608 | 1.7087 | 220 | 0.8403 | 0.7264 | 0.7153 | 0.7580 | 0.7232 |
1.2059 | 1.7864 | 230 | 0.8654 | 0.7293 | 0.7240 | 0.7632 | 0.7274 |
1.1956 | 1.8641 | 240 | 0.7840 | 0.7524 | 0.7435 | 0.7721 | 0.7498 |
1.1926 | 1.9417 | 250 | 0.8357 | 0.7383 | 0.7326 | 0.7800 | 0.7359 |
1.1563 | 2.0194 | 260 | 0.8298 | 0.7413 | 0.7332 | 0.7727 | 0.7359 |
0.9693 | 2.0971 | 270 | 0.7872 | 0.7512 | 0.7434 | 0.7717 | 0.7475 |
0.9372 | 2.1748 | 280 | 0.7755 | 0.7561 | 0.7502 | 0.7704 | 0.7527 |
1.0188 | 2.2524 | 290 | 0.7516 | 0.7612 | 0.7539 | 0.7832 | 0.7566 |
0.8951 | 2.3301 | 300 | 0.7819 | 0.7510 | 0.7408 | 0.7678 | 0.7457 |
0.8975 | 2.4078 | 310 | 0.8678 | 0.7298 | 0.7221 | 0.7643 | 0.7269 |
0.9194 | 2.4854 | 320 | 0.7628 | 0.7655 | 0.7555 | 0.7908 | 0.7596 |
0.8753 | 2.5631 | 330 | 0.7341 | 0.7668 | 0.7567 | 0.7876 | 0.7624 |
0.8798 | 2.6408 | 340 | 0.7475 | 0.7600 | 0.7541 | 0.7839 | 0.7589 |
0.9025 | 2.7184 | 350 | 0.7138 | 0.7694 | 0.7632 | 0.7889 | 0.7676 |
0.8974 | 2.7961 | 360 | 0.7128 | 0.7736 | 0.7668 | 0.7868 | 0.7694 |
0.8956 | 2.8738 | 370 | 0.7460 | 0.7636 | 0.7580 | 0.7855 | 0.7618 |
0.8629 | 2.9515 | 380 | 0.7315 | 0.7675 | 0.7590 | 0.7853 | 0.7616 |
0.8477 | 3.0291 | 390 | 0.7071 | 0.7738 | 0.7674 | 0.7933 | 0.7705 |
0.6569 | 3.1068 | 400 | 0.7051 | 0.7787 | 0.7681 | 0.7907 | 0.7723 |
0.691 | 3.1845 | 410 | 0.6839 | 0.7840 | 0.7768 | 0.8040 | 0.7780 |
0.6823 | 3.2621 | 420 | 0.6759 | 0.7852 | 0.7768 | 0.7935 | 0.7810 |
0.7074 | 3.3398 | 430 | 0.6757 | 0.7835 | 0.7795 | 0.8003 | 0.7812 |
0.6721 | 3.4175 | 440 | 0.6905 | 0.7889 | 0.7811 | 0.7999 | 0.7851 |
0.7367 | 3.4951 | 450 | 0.6906 | 0.7830 | 0.7750 | 0.7939 | 0.7812 |
0.6784 | 3.5728 | 460 | 0.6663 | 0.7937 | 0.7863 | 0.8039 | 0.7913 |
0.6661 | 3.6505 | 470 | 0.6949 | 0.7840 | 0.7762 | 0.7990 | 0.7804 |
0.6648 | 3.7282 | 480 | 0.6440 | 0.7971 | 0.7922 | 0.8119 | 0.7937 |
0.7052 | 3.8058 | 490 | 0.6983 | 0.7823 | 0.7748 | 0.7917 | 0.7784 |
0.7213 | 3.8835 | 500 | 0.6627 | 0.7930 | 0.7877 | 0.8059 | 0.7878 |
0.6638 | 3.9612 | 510 | 0.6402 | 0.7971 | 0.7910 | 0.8050 | 0.7929 |
0.6242 | 4.0388 | 520 | 0.6487 | 0.7983 | 0.7925 | 0.8090 | 0.7961 |
0.5233 | 4.1165 | 530 | 0.6648 | 0.7942 | 0.7859 | 0.8033 | 0.7899 |
0.5677 | 4.1942 | 540 | 0.6201 | 0.8076 | 0.8017 | 0.8141 | 0.8044 |
0.5325 | 4.2718 | 550 | 0.6332 | 0.8039 | 0.7970 | 0.8110 | 0.8018 |
0.5479 | 4.3495 | 560 | 0.6283 | 0.8083 | 0.8028 | 0.8143 | 0.8047 |
0.5485 | 4.4272 | 570 | 0.6005 | 0.8122 | 0.8090 | 0.8183 | 0.8101 |
0.5521 | 4.5049 | 580 | 0.6273 | 0.8069 | 0.8029 | 0.8169 | 0.8040 |
0.5607 | 4.5825 | 590 | 0.6291 | 0.8069 | 0.8020 | 0.8203 | 0.8027 |
0.5263 | 4.6602 | 600 | 0.6218 | 0.8076 | 0.8033 | 0.8192 | 0.8026 |
0.5798 | 4.7379 | 610 | 0.5982 | 0.8178 | 0.8134 | 0.8275 | 0.8138 |
0.5593 | 4.8155 | 620 | 0.6212 | 0.8105 | 0.8075 | 0.8209 | 0.8067 |
0.58 | 4.8932 | 630 | 0.5949 | 0.8166 | 0.8111 | 0.8250 | 0.8121 |
0.4746 | 4.9709 | 640 | 0.6007 | 0.8180 | 0.8122 | 0.8273 | 0.8122 |
0.4821 | 5.0485 | 650 | 0.5929 | 0.8183 | 0.8131 | 0.8234 | 0.8138 |
0.4221 | 5.1262 | 660 | 0.6179 | 0.8086 | 0.8017 | 0.8151 | 0.8044 |
0.4615 | 5.2039 | 670 | 0.5937 | 0.8195 | 0.8136 | 0.8228 | 0.8150 |
0.4078 | 5.2816 | 680 | 0.5970 | 0.8132 | 0.8095 | 0.8213 | 0.8085 |
0.4551 | 5.3592 | 690 | 0.5937 | 0.8132 | 0.8100 | 0.8210 | 0.8103 |
0.4211 | 5.4369 | 700 | 0.5834 | 0.8180 | 0.8140 | 0.8236 | 0.8134 |
0.4055 | 5.5146 | 710 | 0.5938 | 0.8173 | 0.8114 | 0.8239 | 0.8116 |
0.4284 | 5.5922 | 720 | 0.5988 | 0.8134 | 0.8102 | 0.8182 | 0.8103 |
0.4113 | 5.6699 | 730 | 0.6067 | 0.8132 | 0.8072 | 0.8198 | 0.8094 |
0.3689 | 5.7476 | 740 | 0.6013 | 0.8134 | 0.8081 | 0.8201 | 0.8099 |
0.3788 | 5.8252 | 750 | 0.5993 | 0.8090 | 0.8024 | 0.8146 | 0.8048 |
0.427 | 5.9029 | 760 | 0.5807 | 0.8222 | 0.8173 | 0.8262 | 0.8185 |
0.4027 | 5.9806 | 770 | 0.5829 | 0.8239 | 0.8182 | 0.8289 | 0.8191 |
0.3971 | 6.0583 | 780 | 0.5741 | 0.8243 | 0.8218 | 0.8300 | 0.8209 |
0.3543 | 6.1359 | 790 | 0.5662 | 0.8246 | 0.8206 | 0.8296 | 0.8203 |
0.3304 | 6.2136 | 800 | 0.5678 | 0.8253 | 0.8216 | 0.8323 | 0.8219 |
0.3065 | 6.2913 | 810 | 0.5797 | 0.8214 | 0.8167 | 0.8279 | 0.8175 |
0.2913 | 6.3689 | 820 | 0.5769 | 0.8212 | 0.8162 | 0.8250 | 0.8167 |
0.3447 | 6.4466 | 830 | 0.5726 | 0.8202 | 0.8165 | 0.8256 | 0.8168 |
0.3064 | 6.5243 | 840 | 0.5750 | 0.8241 | 0.8207 | 0.8310 | 0.8208 |
0.3106 | 6.6019 | 850 | 0.5631 | 0.8285 | 0.8247 | 0.8355 | 0.8246 |
0.297 | 6.6796 | 860 | 0.5591 | 0.8282 | 0.8238 | 0.8321 | 0.8244 |
0.2967 | 6.7573 | 870 | 0.5623 | 0.8243 | 0.8198 | 0.8279 | 0.8206 |
0.3157 | 6.8350 | 880 | 0.5617 | 0.8222 | 0.8177 | 0.8247 | 0.8182 |
0.3129 | 6.9126 | 890 | 0.5638 | 0.8251 | 0.8200 | 0.8283 | 0.8210 |
0.2994 | 6.9903 | 900 | 0.5578 | 0.8270 | 0.8210 | 0.8288 | 0.8233 |
0.31 | 7.0680 | 910 | 0.5498 | 0.8304 | 0.8262 | 0.8315 | 0.8267 |
0.2733 | 7.1456 | 920 | 0.5547 | 0.8280 | 0.8230 | 0.8291 | 0.8242 |
0.2496 | 7.2233 | 930 | 0.5527 | 0.8292 | 0.8255 | 0.8319 | 0.8255 |
0.2398 | 7.3010 | 940 | 0.5562 | 0.8287 | 0.8240 | 0.8305 | 0.8250 |
0.2758 | 7.3786 | 950 | 0.5509 | 0.8311 | 0.8272 | 0.8337 | 0.8279 |
0.2539 | 7.4563 | 960 | 0.5521 | 0.8297 | 0.8243 | 0.8310 | 0.8260 |
0.2891 | 7.5340 | 970 | 0.5492 | 0.8314 | 0.8266 | 0.8337 | 0.8275 |
0.239 | 7.6117 | 980 | 0.5466 | 0.8321 | 0.8271 | 0.8337 | 0.8283 |
0.23 | 7.6893 | 990 | 0.5449 | 0.8324 | 0.8275 | 0.8338 | 0.8285 |
0.2565 | 7.7670 | 1000 | 0.5447 | 0.8324 | 0.8275 | 0.8340 | 0.8285 |
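
The card does not state how the four metric columns were computed. A plausible `compute_metrics` sketch for the `Trainer`, using scikit-learn with macro averaging (an assumption, not confirmed by the card), would look like this:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Plausible compute_metrics for the metric columns above (assumed
# implementation; macro averaging is a guess, not stated in the card).
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```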
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1