---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: test
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9075
    - name: Precision
      type: precision
      value: 0.9136222146251665
    - name: Recall
      type: recall
      value: 0.9075
    - name: F1
      type: f1
      value: 0.904614447173649
---

# vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4946
- Accuracy: 0.9075
- Precision: 0.9136
- Recall: 0.9075
- F1: 0.9046

## Model description

This is a ViT-Base model (16×16 patches, 224×224 input resolution, pretrained on ImageNet-21k) fine-tuned to classify kidney stone images. Judging from the model name, the training images are 256-pixel crops of surface (SUR) views from the Jonathan El-Beze kidney stone dataset, but the card does not document this further.

## Intended uses & limitations

More information needed. A minimal inference sketch is included at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

A `TrainingArguments` sketch matching these settings appears at the end of this card.

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2895        | 0.6667  | 100  | 0.5586          | 0.795    | 0.8452    | 0.795  | 0.7997 |
| 0.0848        | 1.3333  | 200  | 0.8609          | 0.7975   | 0.8401    | 0.7975 | 0.7883 |
| 0.0782        | 2.0     | 300  | 0.7032          | 0.81     | 0.8414    | 0.81   | 0.8116 |
| 0.0158        | 2.6667  | 400  | 0.7198          | 0.8342   | 0.8570    | 0.8342 | 0.8336 |
| 0.0327        | 3.3333  | 500  | 0.7624          | 0.8458   | 0.8484    | 0.8458 | 0.8448 |
| 0.0044        | 4.0     | 600  | 0.6172          | 0.8792   | 0.8926    | 0.8792 | 0.8769 |
| 0.0032        | 4.6667  | 700  | 0.7772          | 0.8517   | 0.8589    | 0.8517 | 0.8496 |
| 0.0026        | 5.3333  | 800  | 0.8897          | 0.8375   | 0.8478    | 0.8375 | 0.8351 |
| 0.0033        | 6.0     | 900  | 0.4946          | 0.9075   | 0.9136    | 0.9075 | 0.9046 |
| 0.0019        | 6.6667  | 1000 | 0.6971          | 0.8725   | 0.8727    | 0.8725 | 0.8716 |
| 0.0016        | 7.3333  | 1100 | 0.7355          | 0.8692   | 0.8711    | 0.8692 | 0.8685 |
| 0.0136        | 8.0     | 1200 | 0.9004          | 0.8675   | 0.8900    | 0.8675 | 0.8613 |
| 0.0013        | 8.6667  | 1300 | 0.7646          | 0.875    | 0.8837    | 0.875  | 0.8715 |
| 0.0011        | 9.3333  | 1400 | 0.7833          | 0.875    | 0.8786    | 0.875  | 0.8729 |
| 0.0009        | 10.0    | 1500 | 0.7968          | 0.8767   | 0.8800    | 0.8767 | 0.8747 |
| 0.0009        | 10.6667 | 1600 | 0.8085          | 0.8758   | 0.8790    | 0.8758 | 0.8738 |
| 0.0008        | 11.3333 | 1700 | 0.8175          | 0.8758   | 0.8790    | 0.8758 | 0.8738 |
| 0.0008        | 12.0    | 1800 | 0.8242          | 0.8767   | 0.8801    | 0.8767 | 0.8746 |
| 0.0007        | 12.6667 | 1900 | 0.8292          | 0.8767   | 0.8801    | 0.8767 | 0.8746 |
| 0.0007        | 13.3333 | 2000 | 0.8335          | 0.8775   | 0.8812    | 0.8775 | 0.8754 |
| 0.0007        | 14.0    | 2100 | 0.8363          | 0.8775   | 0.8812    | 0.8775 | 0.8754 |
| 0.0007        | 14.6667 | 2200 | 0.8376          | 0.8775   | 0.8812    | 0.8775 | 0.8754 |

### Framework versions

- Transformers 4.48.2
- PyTorch 2.6.0+cu126
- Datasets 3.2.0
- Tokenizers 0.21.0
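
## How to use

The card does not include a usage example; the following is a minimal inference sketch. The repo id and the image path are placeholders (the namespace under which this checkpoint is published is not stated in the card), so substitute the actual values before running.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder: replace with the actual Hub repo id for this checkpoint.
model_id = "<namespace>/vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

# Placeholder path; the model expects an RGB kidney-stone image.
image = Image.open("kidney_stone.jpg").convert("RGB")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```

The image processor resizes inputs to 224×224 and normalizes them to match the base model's pretraining statistics, so no manual preprocessing is needed.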
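
## Training configuration sketch

The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the exact training script: `output_dir` is a placeholder, and the 100-step evaluation cadence is inferred from the results table rather than stated in the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-kidney-stone-SUR",  # placeholder, not from the card
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,               # "Native AMP" mixed precision
    eval_strategy="steps",   # inferred: the table logs evaluation every 100 steps
    eval_steps=100,
)
```

Note that the headline metrics match the step-900 checkpoint rather than the final step-2200 one, which suggests best-checkpoint selection (e.g. `load_best_model_at_end=True`) was used, though the card does not state this.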