---
license: other
tags:
- generated_from_keras_callback
model-index:
- name: AhamadShaik/SegFormer_RESIZE_NLM
  results: []
---
# AhamadShaik/SegFormer_RESIZE_NLM

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.0886
- Train Dice Coef: 0.7856
- Train IoU: 0.6535
- Validation Loss: 0.0584
- Validation Dice Coef: 0.8603
- Validation IoU: 0.7566
- Train Learning Rate: 1e-04
- Epoch: 5
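No usage code ships with this card. The snippet below is a minimal sketch of how a TensorFlow SegFormer checkpoint like this one is typically loaded for inference with Transformers 4.27; the input file name and the fallback to the base model's image processor are assumptions, not part of the original training code.

```python
# Hypothetical usage sketch (not from the original card): run the fine-tuned
# TF checkpoint on a single image and recover a per-pixel segmentation mask.
import tensorflow as tf
from PIL import Image
from transformers import SegformerImageProcessor, TFSegformerForSemanticSegmentation

# Assumption: this repo has no processor config, so reuse the base model's.
processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = TFSegformerForSemanticSegmentation.from_pretrained(
    "AhamadShaik/SegFormer_RESIZE_NLM"
)

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="tf")

# SegFormer logits come out at 1/4 resolution in NCHW layout.
logits = model(**inputs).logits  # (batch, num_labels, H/4, W/4)

# Upsample back to the input resolution (NHWC for tf.image.resize),
# then take the per-pixel argmax over the label dimension.
upsampled = tf.image.resize(
    tf.transpose(logits, [0, 2, 3, 1]), size=image.size[::-1]
)
mask = tf.argmax(upsampled, axis=-1)[0].numpy()
```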
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
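For reference, the optimizer dictionary above maps one-to-one onto the Keras Adam constructor of the TF 2.10 era; a sketch of the equivalent construction:

```python
import tensorflow as tf

# Equivalent Keras construction of the optimizer config reported above.
# `decay=0.0` is the legacy inverse-time decay argument in TF 2.10.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-4,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    decay=0.0,
)
```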
### Training results
| Train Loss | Train Dice Coef | Train IoU | Validation Loss | Validation Dice Coef | Validation IoU | Train LR | Epoch |
|:----------:|:---------------:|:---------:|:---------------:|:--------------------:|:--------------:|:--------:|:-----:|
| 0.2282     | 0.5657          | 0.4102    | 0.1322          | 0.6524               | 0.4967         | 1e-04    | 0     |
| 0.1354     | 0.6853          | 0.5329    | 0.0855          | 0.7853               | 0.6544         | 1e-04    | 1     |
| 0.1105     | 0.7364          | 0.5924    | 0.0737          | 0.8147               | 0.6916         | 1e-04    | 2     |
| 0.0985     | 0.7610          | 0.6226    | 0.0632          | 0.8518               | 0.7440         | 1e-04    | 3     |
| 0.0933     | 0.7745          | 0.6399    | 0.0627          | 0.8455               | 0.7351         | 1e-04    | 4     |
| 0.0886     | 0.7856          | 0.6535    | 0.0584          | 0.8603               | 0.7566         | 1e-04    | 5     |
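The exact Dice coefficient and IoU implementations used during training are not included in this card. The sketch below shows one common smoothed definition for binary segmentation masks that matches how Keras training loops often report these metrics; the smoothing constant and the soft (unthresholded) formulation are assumptions.

```python
import tensorflow as tf

SMOOTH = 1e-6  # assumed smoothing constant to avoid division by zero

def dice_coef(y_true, y_pred):
    """Soft Dice coefficient: 2*|A intersect B| / (|A| + |B|)."""
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + SMOOTH) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + SMOOTH
    )

def iou(y_true, y_pred):
    """Soft IoU (Jaccard index): |A intersect B| / |A union B|."""
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - intersection
    return (intersection + SMOOTH) / (union + SMOOTH)
```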
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.10.1
- Datasets 2.11.0
- Tokenizers 0.13.3