Ivanrs committed (verified)
Commit 0b7bcb7 · 1 Parent(s): b9a458e

vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR

README.md CHANGED
@@ -26,31 +26,30 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.8016666666666666
+    value: 0.9075
   - name: Precision
     type: precision
-    value: 0.8374648240970316
+    value: 0.9136222146251665
   - name: Recall
     type: recall
-    value: 0.8016666666666666
+    value: 0.9075
   - name: F1
     type: f1
-    value: 0.8071519788135743
+    value: 0.904614447173649
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/cv-inside/vit-base-kidney-stone/runs/1ngojzaz)
 # vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR
 
 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5368
-- Accuracy: 0.8017
-- Precision: 0.8375
-- Recall: 0.8017
-- F1: 0.8072
+- Loss: 0.4946
+- Accuracy: 0.9075
+- Precision: 0.9136
+- Recall: 0.9075
+- F1: 0.9046
 
 ## Model description
 
@@ -75,58 +74,35 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs: 30
+- num_epochs: 15
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
 |:-------------:|:-------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| 0.2324 | 0.6667 | 100 | 0.5368 | 0.8017 | 0.8375 | 0.8017 | 0.8072 |
-| 0.1099 | 1.3333 | 200 | 0.5944 | 0.8392 | 0.8642 | 0.8392 | 0.8369 |
-| 0.0592 | 2.0 | 300 | 0.5456 | 0.8733 | 0.8820 | 0.8733 | 0.8720 |
-| 0.0881 | 2.6667 | 400 | 1.3717 | 0.7342 | 0.8270 | 0.7342 | 0.6880 |
-| 0.0922 | 3.3333 | 500 | 1.1645 | 0.7667 | 0.8161 | 0.7667 | 0.7668 |
-| 0.0638 | 4.0 | 600 | 0.8597 | 0.8283 | 0.8389 | 0.8283 | 0.8270 |
-| 0.0296 | 4.6667 | 700 | 0.8513 | 0.8325 | 0.8603 | 0.8325 | 0.8307 |
-| 0.0023 | 5.3333 | 800 | 0.9016 | 0.8258 | 0.8277 | 0.8258 | 0.8252 |
-| 0.0826 | 6.0 | 900 | 0.9257 | 0.825 | 0.8303 | 0.825 | 0.8218 |
-| 0.0016 | 6.6667 | 1000 | 0.9261 | 0.83 | 0.8345 | 0.83 | 0.8276 |
-| 0.0034 | 7.3333 | 1100 | 1.1082 | 0.8225 | 0.8315 | 0.8225 | 0.8199 |
-| 0.001 | 8.0 | 1200 | 1.0582 | 0.8367 | 0.8399 | 0.8367 | 0.8343 |
-| 0.0008 | 8.6667 | 1300 | 1.0387 | 0.8417 | 0.8446 | 0.8417 | 0.8393 |
-| 0.001 | 9.3333 | 1400 | 0.9528 | 0.8433 | 0.8530 | 0.8433 | 0.8402 |
-| 0.0262 | 10.0 | 1500 | 0.8878 | 0.8533 | 0.8615 | 0.8533 | 0.8524 |
-| 0.001 | 10.6667 | 1600 | 0.9026 | 0.8317 | 0.8482 | 0.8317 | 0.8310 |
-| 0.0006 | 11.3333 | 1700 | 0.8520 | 0.8558 | 0.8665 | 0.8558 | 0.8501 |
-| 0.0454 | 12.0 | 1800 | 1.0215 | 0.8333 | 0.8556 | 0.8333 | 0.8308 |
-| 0.001 | 12.6667 | 1900 | 0.7867 | 0.8442 | 0.8445 | 0.8442 | 0.8417 |
-| 0.0005 | 13.3333 | 2000 | 0.8048 | 0.8608 | 0.8605 | 0.8608 | 0.8575 |
-| 0.0004 | 14.0 | 2100 | 0.8120 | 0.8617 | 0.8619 | 0.8617 | 0.8587 |
-| 0.0004 | 14.6667 | 2200 | 0.8208 | 0.8625 | 0.8630 | 0.8625 | 0.8595 |
-| 0.0003 | 15.3333 | 2300 | 0.8303 | 0.8617 | 0.8623 | 0.8617 | 0.8587 |
-| 0.0003 | 16.0 | 2400 | 0.8375 | 0.8625 | 0.8631 | 0.8625 | 0.8596 |
-| 0.0003 | 16.6667 | 2500 | 0.8439 | 0.8625 | 0.8631 | 0.8625 | 0.8596 |
-| 0.0003 | 17.3333 | 2600 | 0.8506 | 0.8625 | 0.8627 | 0.8625 | 0.8595 |
-| 0.0002 | 18.0 | 2700 | 0.8563 | 0.8633 | 0.8635 | 0.8633 | 0.8605 |
-| 0.0002 | 18.6667 | 2800 | 0.8621 | 0.8633 | 0.8636 | 0.8633 | 0.8605 |
-| 0.0002 | 19.3333 | 2900 | 0.8663 | 0.8633 | 0.8636 | 0.8633 | 0.8605 |
-| 0.0002 | 20.0 | 3000 | 0.8714 | 0.8625 | 0.8625 | 0.8625 | 0.8597 |
-| 0.0002 | 20.6667 | 3100 | 0.8761 | 0.8625 | 0.8625 | 0.8625 | 0.8597 |
-| 0.0002 | 21.3333 | 3200 | 0.8802 | 0.8625 | 0.8625 | 0.8625 | 0.8597 |
-| 0.0002 | 22.0 | 3300 | 0.8841 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0002 | 22.6667 | 3400 | 0.8879 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0002 | 23.3333 | 3500 | 0.8916 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0002 | 24.0 | 3600 | 0.8944 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0002 | 24.6667 | 3700 | 0.8973 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0002 | 25.3333 | 3800 | 0.9000 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0001 | 26.0 | 3900 | 0.9023 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0001 | 26.6667 | 4000 | 0.9042 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0001 | 27.3333 | 4100 | 0.9060 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0001 | 28.0 | 4200 | 0.9074 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0001 | 28.6667 | 4300 | 0.9085 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0001 | 29.3333 | 4400 | 0.9091 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
-| 0.0001 | 30.0 | 4500 | 0.9094 | 0.8633 | 0.8633 | 0.8633 | 0.8605 |
+| 0.2895 | 0.6667 | 100 | 0.5586 | 0.795 | 0.8452 | 0.795 | 0.7997 |
+| 0.0848 | 1.3333 | 200 | 0.8609 | 0.7975 | 0.8401 | 0.7975 | 0.7883 |
+| 0.0782 | 2.0 | 300 | 0.7032 | 0.81 | 0.8414 | 0.81 | 0.8116 |
+| 0.0158 | 2.6667 | 400 | 0.7198 | 0.8342 | 0.8570 | 0.8342 | 0.8336 |
+| 0.0327 | 3.3333 | 500 | 0.7624 | 0.8458 | 0.8484 | 0.8458 | 0.8448 |
+| 0.0044 | 4.0 | 600 | 0.6172 | 0.8792 | 0.8926 | 0.8792 | 0.8769 |
+| 0.0032 | 4.6667 | 700 | 0.7772 | 0.8517 | 0.8589 | 0.8517 | 0.8496 |
+| 0.0026 | 5.3333 | 800 | 0.8897 | 0.8375 | 0.8478 | 0.8375 | 0.8351 |
+| 0.0033 | 6.0 | 900 | 0.4946 | 0.9075 | 0.9136 | 0.9075 | 0.9046 |
+| 0.0019 | 6.6667 | 1000 | 0.6971 | 0.8725 | 0.8727 | 0.8725 | 0.8716 |
+| 0.0016 | 7.3333 | 1100 | 0.7355 | 0.8692 | 0.8711 | 0.8692 | 0.8685 |
+| 0.0136 | 8.0 | 1200 | 0.9004 | 0.8675 | 0.8900 | 0.8675 | 0.8613 |
+| 0.0013 | 8.6667 | 1300 | 0.7646 | 0.875 | 0.8837 | 0.875 | 0.8715 |
+| 0.0011 | 9.3333 | 1400 | 0.7833 | 0.875 | 0.8786 | 0.875 | 0.8729 |
+| 0.0009 | 10.0 | 1500 | 0.7968 | 0.8767 | 0.8800 | 0.8767 | 0.8747 |
+| 0.0009 | 10.6667 | 1600 | 0.8085 | 0.8758 | 0.8790 | 0.8758 | 0.8738 |
+| 0.0008 | 11.3333 | 1700 | 0.8175 | 0.8758 | 0.8790 | 0.8758 | 0.8738 |
+| 0.0008 | 12.0 | 1800 | 0.8242 | 0.8767 | 0.8801 | 0.8767 | 0.8746 |
+| 0.0007 | 12.6667 | 1900 | 0.8292 | 0.8767 | 0.8801 | 0.8767 | 0.8746 |
+| 0.0007 | 13.3333 | 2000 | 0.8335 | 0.8775 | 0.8812 | 0.8775 | 0.8754 |
+| 0.0007 | 14.0 | 2100 | 0.8363 | 0.8775 | 0.8812 | 0.8775 | 0.8754 |
+| 0.0007 | 14.6667 | 2200 | 0.8376 | 0.8775 | 0.8812 | 0.8775 | 0.8754 |
 
 
 ### Framework versions
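
For anyone reading the updated card, here is a minimal inference sketch (not part of this commit). It assumes the checkpoint is published on the Hub under `Ivanrs/vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR` — a repo id inferred from the commit author and the card title — and `kidney_stone_patch.png` is a placeholder for a local surface-view image patch.

```python
from transformers import pipeline

# Assumed Hub repo id (commit author + model card title); adjust if the actual path differs.
MODEL_ID = "Ivanrs/vit-base-kidney-stone-Jonathan_El-Beze_-w256_1k_v1-_SUR"

# The pipeline loads the fine-tuned ViT weights together with the bundled image
# processor, which handles resizing to the ViT input resolution and normalization.
classifier = pipeline("image-classification", model=MODEL_ID)

# "kidney_stone_patch.png" is a hypothetical local image file.
for pred in classifier("kidney_stone_patch.png"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```

Because the image processor is stored with the checkpoint, a file path or a PIL image can be passed directly without manual preprocessing.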
all_results.json CHANGED
@@ -1,16 +1,16 @@
 {
-    "epoch": 30.0,
-    "eval_accuracy": 0.8016666666666666,
-    "eval_f1": 0.8071519788135743,
-    "eval_loss": 0.5368140339851379,
-    "eval_precision": 0.8374648240970316,
-    "eval_recall": 0.8016666666666666,
-    "eval_runtime": 7.9045,
-    "eval_samples_per_second": 151.813,
-    "eval_steps_per_second": 18.977,
-    "total_flos": 1.115924655734784e+19,
-    "train_loss": 0.02178489219976796,
-    "train_runtime": 1698.0764,
-    "train_samples_per_second": 84.802,
-    "train_steps_per_second": 2.65
+    "epoch": 15.0,
+    "eval_accuracy": 0.9075,
+    "eval_f1": 0.904614447173649,
+    "eval_loss": 0.49464890360832214,
+    "eval_precision": 0.9136222146251665,
+    "eval_recall": 0.9075,
+    "eval_runtime": 7.7726,
+    "eval_samples_per_second": 154.388,
+    "eval_steps_per_second": 19.299,
+    "total_flos": 5.57962327867392e+18,
+    "train_loss": 0.040586712151765826,
+    "train_runtime": 790.5824,
+    "train_samples_per_second": 91.072,
+    "train_steps_per_second": 2.846
 }
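
One detail worth noting in these numbers: `eval_recall` equals `eval_accuracy` exactly (0.9075), which is what weighted-average multiclass recall reduces to for single-label data. The repository does not show the metric code, so the following is only a sketch of a Trainer-style `compute_metrics` callback that would produce metrics of this shape, using scikit-learn (an assumed dependency).

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def compute_metrics(eval_pred):
    """Assumed (not taken from the repo) callback returning accuracy/precision/recall/F1."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,  # weighted recall coincides with accuracy for single-label classification
        "f1": f1,
    }
```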
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3fde94429d263f8493d77e111f68652f3825a6305c232194a29537e6c6590150
+oid sha256:a0c13c2cf6b8b84b51430e5d7bdd7f6ca8b85bfc2d295678e394b64a0d16eeb5
 size 343236280
test_results.json CHANGED
@@ -1,11 +1,11 @@
 {
-    "epoch": 30.0,
-    "eval_accuracy": 0.8016666666666666,
-    "eval_f1": 0.8071519788135743,
-    "eval_loss": 0.5368140339851379,
-    "eval_precision": 0.8374648240970316,
-    "eval_recall": 0.8016666666666666,
-    "eval_runtime": 7.9045,
-    "eval_samples_per_second": 151.813,
-    "eval_steps_per_second": 18.977
+    "epoch": 15.0,
+    "eval_accuracy": 0.9075,
+    "eval_f1": 0.904614447173649,
+    "eval_loss": 0.49464890360832214,
+    "eval_precision": 0.9136222146251665,
+    "eval_recall": 0.9075,
+    "eval_runtime": 7.7726,
+    "eval_samples_per_second": 154.388,
+    "eval_steps_per_second": 19.299
 }
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
-    "epoch": 30.0,
-    "total_flos": 1.115924655734784e+19,
-    "train_loss": 0.02178489219976796,
-    "train_runtime": 1698.0764,
-    "train_samples_per_second": 84.802,
-    "train_steps_per_second": 2.65
+    "epoch": 15.0,
+    "total_flos": 5.57962327867392e+18,
+    "train_loss": 0.040586712151765826,
+    "train_runtime": 790.5824,
+    "train_samples_per_second": 91.072,
+    "train_steps_per_second": 2.846
 }
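
The updated throughput figures are internally consistent with the 15-epoch run shown in the README table, where each 100 logged steps correspond to two-thirds of an epoch (roughly 150 steps per epoch). A rough back-of-the-envelope check, using only the values reported above:

```python
# Consistency check of the reported training stats (values copied from train_results.json).
train_runtime = 790.5824        # seconds
steps_per_second = 2.846
samples_per_second = 91.072
epochs = 15.0

total_steps = steps_per_second * train_runtime           # ~2250 optimizer steps
steps_per_epoch = total_steps / epochs                    # ~150 steps per epoch
samples_processed = samples_per_second * train_runtime    # ~72,000 samples across all epochs

print(round(total_steps), round(steps_per_epoch), round(samples_processed))
```

This lines up with the logging cadence in the training-results table, which stops at step 2200 near epoch 14.67.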
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fc51826b9e9251cc47b9748254c456f8ae20097f2d8a1b09416282e7d7558f1b
+oid sha256:e0b5adbbdbf8953d6da67cffcde72ba0d4ddd9de0a971364fa710e7d881a6baa
 size 5432