TaLong committed on
Commit 40ad81d
1 Parent(s): 2998423

End of training

README.md ADDED
@@ -0,0 +1,98 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: google/vit-base-patch16-224
+ tags:
+ - generated_from_trainer
+ datasets:
+ - medmnist-v2
+ metrics:
+ - accuracy
+ - f1
+ model-index:
+ - name: ViT_breastmnist
+   results:
+   - task:
+       name: Image Classification
+       type: image-classification
+     dataset:
+       name: medmnist-v2
+       type: medmnist-v2
+       config: breastmnist
+       split: validation
+       args: breastmnist
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.8653846153846154
+     - name: F1
+       type: f1
+       value: 0.8156962025316457
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # ViT_breastmnist
+
+ This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the medmnist-v2 dataset.
+ It achieves the following results on the evaluation set (a quick usage sketch follows the list):
+ - Loss: 0.3570
+ - Accuracy: 0.8654
+ - F1: 0.8157
+
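+ As a sanity check, the checkpoint can be loaded with the standard `transformers`
+ image-classification pipeline. This is a minimal sketch, not part of the training
+ script; the repo id `TaLong/ViT_breastmnist` and the file name are assumptions:
+
+ ```python
+ from transformers import pipeline
+
+ # Hypothetical repo id; point this at wherever the checkpoint is hosted.
+ classifier = pipeline("image-classification", model="TaLong/ViT_breastmnist")
+
+ # Placeholder file name. BreastMNIST ultrasounds are grayscale; the pipeline's
+ # image loader converts inputs to RGB before preprocessing.
+ preds = classifier("example_ultrasound.png")
+ print(preds)  # [{'label': ..., 'score': ...}, ...] for both classes
+ ```
+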
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
+ - learning_rate: 5e-05
+ - train_batch_size: 32
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
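+ A minimal sketch of how these settings map onto `TrainingArguments`. The betas and
+ epsilon above are the Adam defaults; `output_dir` and the evaluation cadence are
+ assumptions, not taken from the original script:
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="ViT_breastmnist",   # assumed output path
+     learning_rate=5e-5,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=8,
+     seed=42,
+     adam_beta1=0.9,                 # library defaults, shown explicitly
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     num_train_epochs=10,
+     eval_strategy="steps",          # assumed: the table below evaluates every 10 steps
+     eval_steps=10,
+ )
+ ```
+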
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
+ |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
+ | 0.5391 | 0.5556 | 10 | 0.4007 | 0.7949 | 0.6698 |
+ | 0.3685 | 1.1111 | 20 | 0.3650 | 0.8718 | 0.8120 |
+ | 0.2275 | 1.6667 | 30 | 0.3601 | 0.8462 | 0.8101 |
+ | 0.1604 | 2.2222 | 40 | 0.2938 | 0.8718 | 0.8319 |
+ | 0.0624 | 2.7778 | 50 | 0.2966 | 0.8846 | 0.8511 |
+ | 0.0597 | 3.3333 | 60 | 0.4313 | 0.8974 | 0.8556 |
+ | 0.029 | 3.8889 | 70 | 0.4105 | 0.8718 | 0.8194 |
+ | 0.0094 | 4.4444 | 80 | 0.3746 | 0.9103 | 0.8803 |
+ | 0.0077 | 5.0 | 90 | 0.4098 | 0.8974 | 0.8655 |
+ | 0.0082 | 5.5556 | 100 | 0.4451 | 0.9103 | 0.8803 |
+ | 0.0024 | 6.1111 | 110 | 0.4599 | 0.8974 | 0.8655 |
+ | 0.0028 | 6.6667 | 120 | 0.4739 | 0.8974 | 0.8608 |
+ | 0.0013 | 7.2222 | 130 | 0.4653 | 0.8974 | 0.8655 |
+ | 0.0016 | 7.7778 | 140 | 0.4927 | 0.8974 | 0.8608 |
+ | 0.0011 | 8.3333 | 150 | 0.5115 | 0.8974 | 0.8608 |
+ | 0.0015 | 8.8889 | 160 | 0.5055 | 0.8974 | 0.8608 |
+ | 0.0007 | 9.4444 | 170 | 0.4982 | 0.8974 | 0.8608 |
+ | 0.0011 | 10.0 | 180 | 0.4975 | 0.8974 | 0.8608 |
+
+
+ ### Framework versions
+
+ - Transformers 4.45.1
+ - Pytorch 2.4.0
+ - Datasets 3.0.1
+ - Tokenizers 0.20.0
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "google/vit-base-patch16-224",
+   "architectures": [
+     "ViTForImageClassification"
+   ],
+   "attention_probs_dropout_prob": 0.0,
+   "encoder_stride": 16,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.0,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "malignant",
+     "1": "normal, benign"
+   },
+   "image_size": 224,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "malignant": "0",
+     "normal, benign": "1"
+   },
+   "layer_norm_eps": 1e-12,
+   "model_type": "vit",
+   "num_attention_heads": 12,
+   "num_channels": 3,
+   "num_hidden_layers": 12,
+   "patch_size": 16,
+   "problem_type": "single_label_classification",
+   "qkv_bias": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.1"
+ }
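The `id2label` map above is what ties the model's two output logits back to the BreastMNIST class names. A minimal inference sketch using it directly (the repo id and image path are placeholders, not from this commit):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical repo id; adjust to where this checkpoint lives.
model = AutoModelForImageClassification.from_pretrained("TaLong/ViT_breastmnist")
processor = AutoImageProcessor.from_pretrained("TaLong/ViT_breastmnist")

image = Image.open("example_ultrasound.png").convert("RGB")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, 2)
label = model.config.id2label[logits.argmax(-1).item()]
print(label)  # "malignant" or "normal, benign"
```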
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6026bbd14847ad9e6ed13db75511931ca52f2f45425acacf697b24cc8c0ffdba
+ size 343223968
preprocessor_config.json ADDED
@@ -0,0 +1,22 @@
+ {
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "ViTImageProcessor",
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 224,
+     "width": 224
+   }
+ }
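In plain terms, the processor resizes inputs to 224x224 with bilinear resampling (`resample: 2` is PIL's BILINEAR), rescales pixel values by 1/255, then normalizes each channel to mean 0.5 and std 0.5, mapping pixels into [-1, 1]. A rough torchvision equivalent, as a sketch under those assumptions:

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224),
                      interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.ToTensor(),                                # scales to [0, 1] (factor 1/255)
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # maps to roughly [-1, 1]
])
```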
runs/Oct19_02-25-41_120f3e0b8117/events.out.tfevents.1729304744.120f3e0b8117.23.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc07ba50d540a0d66bdca504e9d1402c5c1bdeada959246198e791868043a014
+ size 15622
runs/Oct19_02-25-41_120f3e0b8117/events.out.tfevents.1729304913.120f3e0b8117.23.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a80c9a75dbd2f4ba596fb5d4ea643e60fc1b95c179de34dd7bd1e5390a12708b
+ size 457
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b037c54a26df5b828ae0db71989b47eed9e91d42d883121b9ff25013597ef68
+ size 5176