---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- generated_from_trainer
- sentiment_analysis
model-index:
- name: augmented-go-emotions-plus-other-datasets-fine-tuned-distilroberta-v3
  results: []
datasets:
- google-research-datasets/go_emotions
language:
- en
metrics:
- f1
- precision
- recall
---

# augmented-go-emotions-plus-other-datasets-fine-tuned-distilroberta-v3

This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on these datasets:

- [GoEmotions](https://github.com/google-research/google-research/tree/master/goemotions)
- [sem_eval_2018_task_1 (English)](https://huggingface.co/datasets/SemEvalWorkshop/sem_eval_2018_task_1)
- [Emotion Detection from Text - Pashupati Gupta](https://www.kaggle.com/datasets/pashupatigupta/emotion-detection-from-text/data)
- [Emotions dataset for NLP - praveengovi](https://www.kaggle.com/datasets/praveengovi/emotions-dataset-for-nlp/data)

The training data was also augmented with [TextAttack](https://github.com/QData/TextAttack). On top of the [first version](https://huggingface.co/paradoxmaske/augmented-go-emotions-plus-other-datasets-fine-tuned-distilroberta) of the model, V3 adds more data augmentation (the numbers in parentheses are label indices; a sketch of this step follows the results list below):

- EasyDataAugmenter on all labels except those that already have many examples: neutral (27), sadness (25), joy (17), love (18), anger (2).
- CharSwapAugmenter on labels with very few examples compared to the others: relief (23), confusion (6), disappointment (9), realization (22), caring (5), excitement (13), desire (8), remorse (24), embarrassment (12), nervousness (19), pride (21), grief (16).

It achieves the following results on the evaluation set:
- Loss: 0.0822
- Micro Precision: 0.6806
- Micro Recall: 0.5843
- Micro F1: 0.6288
- Macro Precision: 0.5709
- Macro Recall: 0.4553
- Macro F1: 0.4950
- Weighted Precision: 0.6654
- Weighted Recall: 0.5843
- Weighted F1: 0.6196
- Hamming Loss: 0.0293
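The augmentation script itself is not part of this card, so the snippet below is only a minimal sketch of the step described above, using TextAttack's `EasyDataAugmenter` and `CharSwapAugmenter`. The augmenter settings and the `augment_example` helper are illustrative assumptions, not the exact configuration used for V3.

```python
# A minimal sketch of the V3 augmentation step described above; the settings
# and helper function are illustrative assumptions, not the exact training setup.
from textattack.augmentation import CharSwapAugmenter, EasyDataAugmenter

# High-frequency labels that are excluded from EasyDataAugmenter.
HIGH_FREQUENCY = {"neutral", "sadness", "joy", "love", "anger"}

# Very rare labels that additionally receive CharSwapAugmenter.
RARE = {
    "relief", "confusion", "disappointment", "realization", "caring",
    "excitement", "desire", "remorse", "embarrassment", "nervousness",
    "pride", "grief",
}

eda = EasyDataAugmenter(pct_words_to_swap=0.1, transformations_per_example=2)
char_swap = CharSwapAugmenter(pct_words_to_swap=0.1, transformations_per_example=2)


def augment_example(text, labels):
    """Return augmented copies of `text` according to its label set."""
    augmented = []
    if labels - HIGH_FREQUENCY:   # at least one label outside the frequent set
        augmented.extend(eda.augment(text))
    if labels & RARE:             # at least one very rare label
        augmented.extend(char_swap.augment(text))
    return augmented


print(augment_example("That twist completely caught me off guard!", {"surprise"}))
```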
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Micro Precision | Micro Recall | Micro F1 | Macro Precision | Macro Recall | Macro F1 | Weighted Precision | Weighted Recall | Weighted F1 | Hamming Loss |
|:-------------:|:-----:|:-----:|:---------------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|:------------:|
| No log | 1.0 | 18454 | 0.0800 | 0.7272 | 0.4822 | 0.5799 | 0.6082 | 0.3841 | 0.4436 | 0.7271 | 0.4822 | 0.5609 | 0.0297 |
| No log | 2.0 | 36908 | 0.0780 | 0.6895 | 0.5674 | 0.6225 | 0.5850 | 0.4612 | 0.4999 | 0.6800 | 0.5674 | 0.6109 | 0.0293 |
| No log | 3.0 | 55362 | 0.0822 | 0.6806 | 0.5843 | 0.6288 | 0.5709 | 0.4553 | 0.4950 | 0.6654 | 0.5843 | 0.6196 | 0.0293 |

### Test results

| Label | Precision | Recall | F1-Score | Support |
|-----------------|-----------|--------|----------|---------|
| admiration | 0.61 | 0.66 | 0.64 | 504 |
| amusement | 0.73 | 0.83 | 0.78 | 264 |
| anger | 0.79 | 0.67 | 0.72 | 1585 |
| annoyance | 0.39 | 0.20 | 0.26 | 320 |
| approval | 0.44 | 0.31 | 0.37 | 351 |
| caring | 0.38 | 0.29 | 0.33 | 135 |
| confusion | 0.43 | 0.42 | 0.43 | 153 |
| curiosity | 0.47 | 0.45 | 0.46 | 284 |
| desire | 0.51 | 0.30 | 0.38 | 83 |
| disappointment | 0.28 | 0.20 | 0.23 | 151 |
| disapproval | 0.41 | 0.30 | 0.35 | 267 |
| disgust | 0.71 | 0.60 | 0.65 | 1222 |
| embarrassment | 0.43 | 0.27 | 0.33 | 37 |
| excitement | 0.40 | 0.38 | 0.39 | 103 |
| fear | 0.78 | 0.74 | 0.76 | 787 |
| gratitude | 0.93 | 0.88 | 0.91 | 352 |
| grief | 0.50 | 0.17 | 0.25 | 6 |
| joy | 0.88 | 0.76 | 0.81 | 2298 |
| love | 0.69 | 0.61 | 0.65 | 1305 |
| nervousness | 0.39 | 0.30 | 0.34 | 23 |
| optimism | 0.70 | 0.58 | 0.64 | 1329 |
| pride | 0.62 | 0.31 | 0.42 | 16 |
| realization | 0.32 | 0.16 | 0.21 | 145 |
| relief | 0.19 | 0.15 | 0.17 | 160 |
| remorse | 0.61 | 0.75 | 0.67 | 56 |
| sadness | 0.75 | 0.66 | 0.71 | 2212 |
| surprise | 0.49 | 0.36 | 0.42 | 572 |
| neutral | 0.65 | 0.54 | 0.59 | 2668 |
| **Micro Avg** | 0.70 | 0.60 | 0.64 | 17388 |
| **Macro Avg** | 0.55 | 0.46 | 0.49 | 17388 |
| **Weighted Avg**| 0.69 | 0.60 | 0.64 | 17388 |
| **Samples Avg** | 0.64 | 0.61 | 0.61 | 17388 |

### Framework versions

- Transformers 4.47.0
- PyTorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.21.0
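For downstream use, the following is a minimal multi-label inference sketch. It assumes this card's repo id (`paradoxmaske/augmented-go-emotions-plus-other-datasets-fine-tuned-distilroberta-v3`) and an illustrative 0.5 sigmoid threshold; the exact post-processing used during evaluation is not documented here.

```python
# A minimal multi-label inference sketch for this model. The 0.5 sigmoid
# threshold is an illustrative assumption, not a documented choice.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "paradoxmaske/augmented-go-emotions-plus-other-datasets-fine-tuned-distilroberta-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Thank you so much, this made my day!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label head: score each emotion independently with a sigmoid.
probs = torch.sigmoid(logits)[0]
predicted = [
    model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5
]
print(predicted)
```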
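For reference, the micro/macro/weighted/samples averages and the Hamming loss reported above can be computed with scikit-learn roughly as follows; the `y_true`/`y_pred` indicator arrays here are placeholder data, not the actual evaluation outputs.

```python
# Sketch of computing the reported multi-label metrics with scikit-learn.
# y_true / y_pred are placeholder binary indicator arrays, one column per label.
import numpy as np
from sklearn.metrics import hamming_loss, precision_recall_fscore_support

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])  # gold labels
y_pred = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0]])  # thresholded predictions

for average in ("micro", "macro", "weighted", "samples"):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=average, zero_division=0
    )
    print(f"{average}: precision={p:.4f} recall={r:.4f} f1={f1:.4f}")

print("hamming loss:", hamming_loss(y_true, y_pred))
```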