---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: vit-base-patch16-224-in21k-laneclassifierasphaltconcrete-detectorVITmain50epochs
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 1.0
    - name: F1
      type: f1
      value: 1.0
    - name: Precision
      type: precision
      value: 1.0
    - name: Recall
      type: recall
      value: 1.0
---

# vit-base-patch16-224-in21k-laneclassifierasphaltconcrete-detectorVITmain50epochs

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0004
- Accuracy: 1.0
- F1: 1.0
- Precision: 1.0
- Recall: 1.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy           | F1                 | Precision          | Recall             |
|:-------------:|:------:|:----:|:---------------:|:------------------:|:------------------:|:------------------:|:------------------:|
| 0.0576        | 0.9933 | 111  | 0.0139          | 0.9977628635346756 | 0.9966709613995368 | 0.9985795454545454 | 0.9947916666666667 |
| 0.0365        | 1.9955 | 223  | 0.0012          | 1.0                | 1.0                | 1.0                | 1.0                |
| 0.0009        | 2.9978 | 335  | 0.0008          | 1.0                | 1.0                | 1.0                | 1.0                |
| 0.0007        | 4.0    | 447  | 0.0007          | 1.0                | 1.0                | 1.0                | 1.0                |
| 0.0006        | 4.9933 | 558  | 0.0005          | 1.0                | 1.0                | 1.0                | 1.0                |
| 0.0005        | 5.9955 | 670  | 0.0005          | 1.0                | 1.0                | 1.0                | 1.0                |
| 0.0005        | 6.9978 | 782  | 0.0004          | 1.0                | 1.0                | 1.0                | 1.0                |
| 0.0005        | 7.9463 | 888  | 0.0004          | 1.0                | 1.0                | 1.0                | 1.0                |

### Framework versions

- Transformers 4.43.3
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
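The card does not yet include a usage example. Below is a minimal inference sketch: the checkpoint path is an assumption derived from the model name above (replace it with the real Hub repo id or a local output directory), and the predicted labels come from whatever class folders the training imagefolder contained.

```python
from transformers import pipeline
from PIL import Image

# Assumed checkpoint location, taken from the model name above; replace
# with the actual Hub repo id (e.g. "<user>/<model-name>") or a local path.
classifier = pipeline(
    "image-classification",
    model="vit-base-patch16-224-in21k-laneclassifierasphaltconcrete-detectorVITmain50epochs",
)

image = Image.open("lane.jpg")  # hypothetical input image of a lane surface
for prediction in classifier(image):
    # Each prediction is a dict with a predicted label and its score.
    print(prediction["label"], round(prediction["score"], 4))
```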
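The hyperparameters listed under "Training hyperparameters" map directly onto `transformers.TrainingArguments`. The following is a sketch of an equivalent configuration, assuming the standard `Trainer` loop was used; `output_dir` is hypothetical, and Adam's betas and epsilon match the `TrainingArguments` defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-lane-classifier",  # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,     # effective train batch size: 4 * 4 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=8,
    eval_strategy="epoch",             # one evaluation per epoch, matching the results table
)
```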
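The raw training log reported dict-shaped metric values (e.g. `{'accuracy': 1.0}`, cleaned up in the table above), which suggests the outputs of `evaluate.compute()` were returned unmodified from a `compute_metrics` callback. A sketch of such a callback for what is presumably a two-class (asphalt vs. concrete) setup:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
precision = evaluate.load("precision")
recall = evaluate.load("recall")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Returning the .compute() dicts as-is reproduces the
    # {'accuracy': ...}-style entries seen in the original log.
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels),
        "f1": f1.compute(predictions=preds, references=labels),
        "precision": precision.compute(predictions=preds, references=labels),
        "recall": recall.compute(predictions=preds, references=labels),
    }
```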