---
license: apache-2.0
base_model: Visual-Attention-Network/van-tiny
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- recall
- precision
model-index:
- name: teacher-status-van-tiny-256-2
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9759358288770054
    - name: Recall
      type: recall
      value: 0.9756944444444444
    - name: Precision
      type: precision
      value: 0.9929328621908127
---
# teacher-status-van-tiny-256-2
This model is a fine-tuned version of [Visual-Attention-Network/van-tiny](https://huggingface.co/Visual-Attention-Network/van-tiny) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.0916
- Accuracy: 0.9759
- F1 Score: 0.9842
- Recall: 0.9757
- Precision: 0.9929
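For quick inference, a minimal sketch using the `transformers` pipeline is shown below. The repository id is a placeholder; point it at wherever this checkpoint is actually stored (Hub repo or local path):

```python
from transformers import pipeline
from PIL import Image

# Placeholder id: replace with the actual Hub repo id or a local checkpoint path.
classifier = pipeline(
    "image-classification",
    model="your-namespace/teacher-status-van-tiny-256-2",
)

image = Image.open("example.jpg").convert("RGB")
print(classifier(image))
# e.g. [{'label': '...', 'score': 0.99}, {'label': '...', 'score': 0.01}]
```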
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
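A sketch of how these values map onto `TrainingArguments`; this is an illustration, not the exact training script (dataset loading, the number of labels, and the metric callback are assumptions):

```python
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    TrainingArguments,
    Trainer,
)

model_name = "Visual-Attention-Network/van-tiny"
image_processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(
    model_name,
    num_labels=2,                  # assumption: binary "status" labels
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head
)

training_args = TrainingArguments(
    output_dir="teacher-status-van-tiny-256-2",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)

# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=...,   # imagefolder train split, preprocessed with image_processor
#     eval_dataset=...,    # evaluation split
#     compute_metrics=compute_metrics,  # see the sketch under "Training results"
# )
# trainer.train()
```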
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
---|---|---|---|---|---|---|---|
0.6896 | 0.99 | 26 | 0.6707 | 0.7701 | 0.8701 | 1.0 | 0.7701 |
0.5438 | 1.98 | 52 | 0.4302 | 0.7701 | 0.8701 | 1.0 | 0.7701 |
0.3756 | 2.97 | 78 | 0.2762 | 0.8850 | 0.9285 | 0.9688 | 0.8914 |
0.3017 | 4.0 | 105 | 0.2002 | 0.9225 | 0.9503 | 0.9618 | 0.9390 |
0.257 | 4.99 | 131 | 0.1794 | 0.9385 | 0.9605 | 0.9722 | 0.9492 |
0.2345 | 5.98 | 157 | 0.1485 | 0.9358 | 0.9582 | 0.9549 | 0.9615 |
0.2318 | 6.97 | 183 | 0.1302 | 0.9439 | 0.9631 | 0.9514 | 0.9751 |
0.2173 | 8.0 | 210 | 0.1277 | 0.9519 | 0.9689 | 0.9722 | 0.9655 |
0.2058 | 8.99 | 236 | 0.1269 | 0.9572 | 0.9722 | 0.9722 | 0.9722 |
0.1955 | 9.98 | 262 | 0.1146 | 0.9572 | 0.9724 | 0.9792 | 0.9658 |
0.2083 | 10.97 | 288 | 0.1083 | 0.9652 | 0.9772 | 0.9688 | 0.9859 |
0.1886 | 12.0 | 315 | 0.1048 | 0.9599 | 0.9741 | 0.9792 | 0.9691 |
0.1618 | 12.99 | 341 | 0.1033 | 0.9626 | 0.9757 | 0.9757 | 0.9757 |
0.1908 | 13.98 | 367 | 0.1044 | 0.9599 | 0.9739 | 0.9722 | 0.9756 |
0.1594 | 14.97 | 393 | 0.0915 | 0.9626 | 0.9758 | 0.9792 | 0.9724 |
0.1474 | 16.0 | 420 | 0.0916 | 0.9759 | 0.9842 | 0.9757 | 0.9929 |
0.1734 | 16.99 | 446 | 0.0951 | 0.9652 | 0.9773 | 0.9722 | 0.9825 |
0.1484 | 17.98 | 472 | 0.1049 | 0.9706 | 0.9809 | 0.9792 | 0.9826 |
0.1495 | 18.97 | 498 | 0.0930 | 0.9679 | 0.9791 | 0.9757 | 0.9825 |
0.1385 | 20.0 | 525 | 0.0955 | 0.9626 | 0.9759 | 0.9826 | 0.9692 |
0.1492 | 20.99 | 551 | 0.0911 | 0.9599 | 0.9741 | 0.9792 | 0.9691 |
0.1401 | 21.98 | 577 | 0.0927 | 0.9706 | 0.9809 | 0.9792 | 0.9826 |
0.1288 | 22.97 | 603 | 0.0940 | 0.9706 | 0.9809 | 0.9792 | 0.9826 |
0.1304 | 24.0 | 630 | 0.0913 | 0.9652 | 0.9775 | 0.9826 | 0.9725 |
0.14 | 24.99 | 656 | 0.0979 | 0.9652 | 0.9776 | 0.9861 | 0.9693 |
0.1461 | 25.98 | 682 | 0.0874 | 0.9706 | 0.9810 | 0.9861 | 0.9759 |
0.1429 | 26.97 | 708 | 0.0837 | 0.9706 | 0.9808 | 0.9757 | 0.9860 |
0.1444 | 28.0 | 735 | 0.0876 | 0.9679 | 0.9792 | 0.9792 | 0.9792 |
0.145 | 28.99 | 761 | 0.0903 | 0.9706 | 0.9809 | 0.9792 | 0.9826 |
0.1445 | 29.71 | 780 | 0.0882 | 0.9679 | 0.9791 | 0.9757 | 0.9825 |
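The accuracy, F1 score, recall, and precision columns above are the kind of output produced by a `compute_metrics` callback. A minimal sketch, assuming the `evaluate` library and a binary label setup:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
recall = evaluate.load("recall")
precision = evaluate.load("precision")

def compute_metrics(eval_pred):
    # eval_pred is (logits, labels) as produced by the Trainer's evaluation loop.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels)["f1"],
        "recall": recall.compute(predictions=preds, references=labels)["recall"],
        "precision": precision.compute(predictions=preds, references=labels)["precision"],
    }
```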
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0