PostsDesert committed on
Commit
b9d1653
1 Parent(s): 050f96a

End of training

README.md ADDED
@@ -0,0 +1,89 @@
+ ---
+ license: other
+ base_model: nvidia/mit-b5
+ tags:
+ - vision
+ - image-segmentation
+ - generated_from_trainer
+ model-index:
+ - name: segformer-b5-finetuned-segments-instryde-foot-test
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # segformer-b5-finetuned-segments-instryde-foot-test
+
+ This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the inStryde/inStrydeSegmentationFoot dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0149
+ - Mean Iou: 0.4800
+ - Mean Accuracy: 0.9599
+ - Overall Accuracy: 0.9599
+ - Per Category Iou: [0.0, 0.9599216842864238]
+ - Per Category Accuracy: [nan, 0.9599216842864238]
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 6e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 50
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------------:|:-------------------------:|
+ | 0.1024 | 0.27 | 20 | 0.2085 | 0.4534 | 0.9067 | 0.9067 | [0.0, 0.9067344993758137] | [nan, 0.9067344993758137] |
+ | 0.0431 | 0.53 | 40 | 0.0487 | 0.4604 | 0.9207 | 0.9207 | [0.0, 0.9207331455341442] | [nan, 0.9207331455341442] |
+ | 0.0354 | 0.8 | 60 | 0.0319 | 0.4577 | 0.9155 | 0.9155 | [0.0, 0.9154662028576415] | [nan, 0.9154662028576415] |
+ | 0.0389 | 1.07 | 80 | 0.0276 | 0.4629 | 0.9257 | 0.9257 | [0.0, 0.9257162800419576] | [nan, 0.9257162800419576] |
+ | 0.0208 | 1.33 | 100 | 0.0244 | 0.4702 | 0.9404 | 0.9404 | [0.0, 0.9403945317069335] | [nan, 0.9403945317069335] |
+ | 0.0241 | 1.6 | 120 | 0.0212 | 0.4703 | 0.9406 | 0.9406 | [0.0, 0.9406131407017349] | [nan, 0.9406131407017349] |
+ | 0.0167 | 1.87 | 140 | 0.0208 | 0.4761 | 0.9521 | 0.9521 | [0.0, 0.9521215619420916] | [nan, 0.9521215619420916] |
+ | 0.0156 | 2.13 | 160 | 0.0205 | 0.4612 | 0.9224 | 0.9224 | [0.0, 0.9224359945462809] | [nan, 0.9224359945462809] |
+ | 0.0156 | 2.4 | 180 | 0.0208 | 0.4734 | 0.9468 | 0.9468 | [0.0, 0.9467575875538612] | [nan, 0.9467575875538612] |
+ | 0.0167 | 2.67 | 200 | 0.0182 | 0.4833 | 0.9667 | 0.9667 | [0.0, 0.9666659635383208] | [nan, 0.9666659635383208] |
+ | 0.0145 | 2.93 | 220 | 0.0243 | 0.4351 | 0.8702 | 0.8702 | [0.0, 0.8702122233110058] | [nan, 0.8702122233110058] |
+ | 0.0114 | 3.2 | 240 | 0.0176 | 0.4686 | 0.9373 | 0.9373 | [0.0, 0.93726765603217] | [nan, 0.93726765603217] |
+ | 0.0155 | 3.47 | 260 | 0.0161 | 0.4770 | 0.9541 | 0.9541 | [0.0, 0.9540767701096305] | [nan, 0.9540767701096305] |
+ | 0.0158 | 3.73 | 280 | 0.0169 | 0.4684 | 0.9368 | 0.9368 | [0.0, 0.9368239181251786] | [nan, 0.9368239181251786] |
+ | 0.0114 | 4.0 | 300 | 0.0162 | 0.4777 | 0.9554 | 0.9554 | [0.0, 0.9554348305492647] | [nan, 0.9554348305492647] |
+ | 0.0112 | 4.27 | 320 | 0.0159 | 0.4839 | 0.9678 | 0.9678 | [0.0, 0.9677532556440432] | [nan, 0.9677532556440432] |
+ | 0.0131 | 4.53 | 340 | 0.0154 | 0.4811 | 0.9622 | 0.9622 | [0.0, 0.9622032718479555] | [nan, 0.9622032718479555] |
+ | 0.0101 | 4.8 | 360 | 0.0156 | 0.4683 | 0.9367 | 0.9367 | [0.0, 0.9366846987126999] | [nan, 0.9366846987126999] |
+ | 0.0102 | 5.07 | 380 | 0.0152 | 0.4758 | 0.9517 | 0.9517 | [0.0, 0.9516509773164403] | [nan, 0.9516509773164403] |
+ | 0.0101 | 5.33 | 400 | 0.0169 | 0.4884 | 0.9768 | 0.9768 | [0.0, 0.9768393358121804] | [nan, 0.9768393358121804] |
+ | 0.0082 | 5.6 | 420 | 0.0150 | 0.4761 | 0.9522 | 0.9522 | [0.0, 0.9522462074215836] | [nan, 0.9522462074215836] |
+ | 0.01 | 5.87 | 440 | 0.0152 | 0.4788 | 0.9576 | 0.9576 | [0.0, 0.9575745140264517] | [nan, 0.9575745140264517] |
+ | 0.0098 | 6.13 | 460 | 0.0148 | 0.4783 | 0.9565 | 0.9565 | [0.0, 0.9565489693736469] | [nan, 0.9565489693736469] |
+ | 0.0088 | 6.4 | 480 | 0.0153 | 0.4795 | 0.9591 | 0.9591 | [0.0, 0.959051850601846] | [nan, 0.959051850601846] |
+ | 0.0091 | 6.67 | 500 | 0.0152 | 0.4828 | 0.9656 | 0.9656 | [0.0, 0.965590177169167] | [nan, 0.965590177169167] |
+ | 0.0102 | 6.93 | 520 | 0.0149 | 0.4800 | 0.9599 | 0.9599 | [0.0, 0.9599216842864238] | [nan, 0.9599216842864238] |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.0.1
+ - Datasets 2.16.1
+ - Tokenizers 0.15.1
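The card above stops at the metrics, so here is a minimal inference sketch for orientation. It assumes the checkpoint is published on the Hub as `PostsDesert/segformer-b5-finetuned-segments-instryde-foot-test` (the repo id is not stated in this commit) and uses a placeholder input image; treat it as a sketch, not the author's documented usage.

```python
# Minimal inference sketch; the repo id and input file are assumptions, not from the card.
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

repo_id = "PostsDesert/segformer-b5-finetuned-segments-instryde-foot-test"  # assumed Hub repo id
processor = SegformerImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)
model.eval()

image = Image.open("foot.jpg").convert("RGB")  # placeholder input
inputs = processor(images=image, return_tensors="pt")  # resize, rescale, normalize (see preprocessor_config.json)

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # 0 = unlabeled, 1 = foot (per config.json id2label)
```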
config.json ADDED
@@ -0,0 +1,78 @@
+ {
+   "_name_or_path": "nvidia/mit-b5",
+   "architectures": [
+     "SegformerForSemanticSegmentation"
+   ],
+   "attention_probs_dropout_prob": 0.0,
+   "classifier_dropout_prob": 0.1,
+   "decoder_hidden_size": 768,
+   "depths": [
+     3,
+     6,
+     40,
+     3
+   ],
+   "downsampling_rates": [
+     1,
+     4,
+     8,
+     16
+   ],
+   "drop_path_rate": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.0,
+   "hidden_sizes": [
+     64,
+     128,
+     320,
+     512
+   ],
+   "id2label": {
+     "0": "unlabeled",
+     "1": "foot"
+   },
+   "image_size": 224,
+   "initializer_range": 0.02,
+   "label2id": {
+     "foot": 1,
+     "unlabeled": 0
+   },
+   "layer_norm_eps": 1e-06,
+   "mlp_ratios": [
+     4,
+     4,
+     4,
+     4
+   ],
+   "model_type": "segformer",
+   "num_attention_heads": [
+     1,
+     2,
+     5,
+     8
+   ],
+   "num_channels": 3,
+   "num_encoder_blocks": 4,
+   "patch_sizes": [
+     7,
+     3,
+     3,
+     3
+   ],
+   "reshape_last_stage": true,
+   "semantic_loss_ignore_index": 255,
+   "sr_ratios": [
+     8,
+     4,
+     2,
+     1
+   ],
+   "strides": [
+     4,
+     2,
+     2,
+     2
+   ],
+   "torch_dtype": "float32",
+   "transformers_version": "4.37.2"
+ }
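This config attaches a two-class head (`unlabeled`, `foot`) to the MiT-B5 encoder (`depths: [3, 6, 40, 3]`), and `semantic_loss_ignore_index: 255` means pixels labeled 255 are excluded from the loss. A small sketch of inspecting these fields through `SegformerConfig`, again assuming the Hub repo id used above:

```python
from transformers import SegformerConfig

# Assumed repo id; swap in the actual location of this checkpoint.
config = SegformerConfig.from_pretrained("PostsDesert/segformer-b5-finetuned-segments-instryde-foot-test")

print(config.num_labels)  # 2
print(config.id2label)    # {0: 'unlabeled', 1: 'foot'}
print(config.depths)      # [3, 6, 40, 3]  (MiT-B5 encoder blocks per stage)
```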
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce29dd48eada291f5039788d8b4317118908240b7658d0ae9ded6d38f8fcc8e6
+ size 338528440
preprocessor_config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "do_normalize": true,
+   "do_reduce_labels": false,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.485,
+     0.456,
+     0.406
+   ],
+   "image_processor_type": "SegformerImageProcessor",
+   "image_std": [
+     0.229,
+     0.224,
+     0.225
+   ],
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 512,
+     "width": 512
+   }
+ }
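For context, this processor resizes inputs to 512x512 with bilinear resampling (`resample: 2`), rescales pixel values by 1/255, and normalizes with the ImageNet mean/std. The sketch below reproduces those steps by hand purely to illustrate what `SegformerImageProcessor` does; the input filename is a placeholder.

```python
# Hand-rolled equivalent of preprocessor_config.json (illustrative sketch only).
import numpy as np
from PIL import Image

image = Image.open("foot.jpg").convert("RGB").resize((512, 512), Image.BILINEAR)  # do_resize, resample=2
x = np.asarray(image).astype(np.float32) * (1.0 / 255.0)                          # do_rescale, rescale_factor
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)                          # image_mean
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)                           # image_std
x = (x - mean) / std                                                               # do_normalize
x = x.transpose(2, 0, 1)[None, ...]                                                # HWC -> NCHW batch for the model
```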
runs/Feb08_03-28-15_150-136-220-15/events.out.tfevents.1707362897.150-136-220-15.5227.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e16fc8d63d492b18c758650d2a157f2747947c561965baa29a7019016043659
+ size 97951
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e78cf1dff8e240d042bf98cff5646df81219738838880747b9e2960e8b8e1d2e
+ size 4283