---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-dunham-carbonate-classifier
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8888888888888888
---
# vit-dunham-carbonate-classifier
## Model description
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on Data S1 of Lokier & Al Junaibi (2016).
The model distills the expertise of 177 volunteers from 33 countries, with a combined 3,270 years of academic and industry experience, who classified 14 carbonate thin-section samples using the classic Dunham (1962) carbonate classification.
In the original paper, the authors set out to objectively assess whether these volunteers apply the Dunham classification to the same standard.
## Intended uses & limitations
- Input: a carbonate thin-section image, in either plane-polarized light (PPL) or cross-polarized light (XPL)
- Output: a Dunham classification (Mudstone/Wackestone/Packstone/Grainstone/Crystalline) and its probability
- Limitation: the original dataset contains no Boundstone sample, so the model cannot classify a Boundstone
Sample image source: Grainstone - Wikipedia
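
As a quick start, here is a minimal inference sketch using the `transformers` image-classification pipeline. The model id and image path below are placeholders to adapt to your setup:

```python
from transformers import pipeline

# Placeholder model id: replace with the full Hub id of this repository.
classifier = pipeline(
    "image-classification",
    model="vit-dunham-carbonate-classifier",
)

# "thin_section.jpg" is a placeholder path to a PPL or XPL
# photomicrograph of a carbonate thin section.
for prediction in classifier("thin_section.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```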
## Training and evaluation data
Source: Lokier & Al Junaibi (2016), Data S1
The data consists of 14 samples. Each sample was photographed at three magnifications (×2, ×4, and ×10) in both PPL and XPL, giving 14 samples × 3 magnifications × 2 polarizations = 84 images in the training dataset.
The label for each sample is the most frequent respondent classification reported in Table 7 of the paper (an imagefolder loading sketch follows the list below):
- Sample 1: Packstone
- Sample 2: Grainstone
- Sample 3: Wackestone
- Sample 4: Packstone
- Sample 5: Wackestone
- Sample 6: Packstone
- Sample 7: Packstone
- Sample 8: Mudstone
- Sample 9: Crystalline
- Sample 10: Grainstone
- Sample 11: Wackestone
- Sample 12: Grainstone
- Sample 13: Grainstone
- Sample 14: Mudstone
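
Since the dataset is loaded with the Hugging Face `imagefolder` builder, the images are assumed to be organized into one directory per class. A minimal loading sketch (the `data_dir` path and folder layout are assumptions, not part of this card):

```python
from datasets import load_dataset

# Assumed layout ("data" is a placeholder path):
# data/
#   Crystalline/  Grainstone/  Mudstone/  Packstone/  Wackestone/
dataset = load_dataset("imagefolder", data_dir="data")

# Class names are inferred from the folder names.
print(dataset["train"].features["label"].names)
```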
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
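
These values map onto `transformers` `TrainingArguments` roughly as follows; this is a hedged sketch, not the exact training script (`output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="vit-dunham-carbonate-classifier",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 16
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```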
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5764 | 1.0 | 5 | 1.5329 | 0.4444 |
| 1.3991 | 2.0 | 10 | 1.4253 | 0.5556 |
| 1.2792 | 3.0 | 15 | 1.2851 | 0.7778 |
| 1.0119 | 4.0 | 20 | 1.1625 | 0.8889 |
| 0.9916 | 5.0 | 25 | 1.0471 | 0.8889 |
| 0.9202 | 6.0 | 30 | 0.9836 | 0.7778 |
| 0.6994 | 7.0 | 35 | 0.8649 | 0.8889 |
| 0.526 | 8.0 | 40 | 0.7110 | 1.0 |
| 0.5383 | 9.0 | 45 | 0.6127 | 1.0 |
| 0.5128 | 10.0 | 50 | 0.5337 | 1.0 |
| 0.4312 | 11.0 | 55 | 0.4887 | 1.0 |
| 0.3827 | 12.0 | 60 | 0.4365 | 1.0 |
| 0.3452 | 13.0 | 65 | 0.3891 | 1.0 |
| 0.3164 | 14.0 | 70 | 0.3677 | 1.0 |
| 0.2899 | 15.0 | 75 | 0.3555 | 1.0 |
| 0.2878 | 16.0 | 80 | 0.3197 | 1.0 |
| 0.2884 | 17.0 | 85 | 0.3056 | 1.0 |
| 0.2633 | 18.0 | 90 | 0.3107 | 1.0 |
| 0.2669 | 19.0 | 95 | 0.3164 | 1.0 |
| 0.2465 | 20.0 | 100 | 0.2949 | 1.0 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3