---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: NabeelShar/emotions_classifier
  results: []
---

# NabeelShar/emotions_classifier

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results after training:
- Train Loss: 0.1967
- Validation Loss: 2.2308
- Train Accuracy: 0.4125
- Epoch: 19

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.0003, 'decay_steps': 3200, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.1}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.9733     | 1.7759          | 0.4            | 0     |
| 0.9887     | 1.5178          | 0.4562         | 1     |
| 0.9215     | 1.5833          | 0.4062         | 2     |
| 0.8050     | 1.6781          | 0.3875         | 3     |
| 0.7375     | 1.6757          | 0.45           | 4     |
| 0.8264     | 1.4876          | 0.475          | 5     |
| 0.6653     | 1.5971          | 0.4813         | 6     |
| 0.5938     | 1.8312          | 0.4188         | 7     |
| 0.5813     | 1.7869          | 0.4625         | 8     |
| 0.5660     | 1.8466          | 0.4375         | 9     |
| 0.5113     | 1.8829          | 0.4437         | 10    |
| 0.4774     | 1.7442          | 0.4188         | 11    |
| 0.3519     | 2.0356          | 0.4313         | 12    |
| 0.4080     | 1.9572          | 0.4313         | 13    |
| 0.3292     | 1.9208          | 0.4437         | 14    |
| 0.2668     | 2.0447          | 0.4562         | 15    |
| 0.2678     | 2.1450          | 0.3937         | 16    |
| 0.2556     | 2.1900          | 0.4562         | 17    |
| 0.1748     | 2.1947          | 0.4625         | 18    |
| 0.1967     | 2.2308          | 0.4125         | 19    |

### Framework versions

- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
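### Learning-rate schedule

The optimizer config above uses Keras's `PolynomialDecay` with `power=1.0` and `cycle=False`, i.e. a linear decay from 3e-4 to 0 over 3200 steps. A minimal sketch of that schedule in plain Python, mirroring the Keras formula:

```python
def polynomial_decay_lr(step: int,
                        initial_lr: float = 3e-4,
                        end_lr: float = 0.0,
                        decay_steps: int = 3200,
                        power: float = 1.0) -> float:
    """Learning rate at a given step (keras PolynomialDecay, cycle=False)."""
    step = min(step, decay_steps)  # hold at end_lr once decay_steps is reached
    fraction = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (fraction ** power) + end_lr

# With power=1.0 the decay is linear: 3e-4 at step 0,
# 1.5e-4 halfway through (step 1600), and 0.0 from step 3200 onward.
```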
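### How to use

A minimal inference sketch using `transformers` with the TensorFlow weights this card was generated with. The gray dummy image is a placeholder; in real use you would pass a photo of a face:

```python
import numpy as np
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "NabeelShar/emotions_classifier"
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

# Dummy 224x224 RGB image as a stand-in; replace with a real image.
image = Image.new("RGB", (224, 224), color="gray")
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits  # shape: (1, num_labels)
label = model.config.id2label[int(np.argmax(logits, axis=-1)[0])]
print(label)
```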