Distilbert-finetuned-emotion
DistilBERT is a smaller, faster variant of the BERT language model produced by knowledge distillation. This checkpoint adds a classification head on top of it to classify the emotion expressed in an input tweet. It is a fine-tuned version of distilbert-base-uncased on the emotion dataset and achieves the following results on the evaluation set:
- Loss: 0.2195
- Accuracy: 0.9235
- F1: 0.9233
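A minimal inference sketch, assuming the checkpoint is available on the Hugging Face Hub as pt-sk/distilbert-finetuned-emotion; the example tweet and printed output are illustrative, not taken from this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the standard text-classification pipeline.
classifier = pipeline("text-classification", model="pt-sk/distilbert-finetuned-emotion")

# Classify a single tweet; the pipeline returns the top label and its score.
print(classifier("I'm thrilled you came to visit!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]  -> LABEL_1 corresponds to Joy (see below)
```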
Emotion Labels
- label_0: Sadness
- label_1: Joy
- label_2: Love
- label_3: Anger
- label_4: Fear
- label_5: Surprise
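If the checkpoint's config does not carry human-readable label names, the raw LABEL_k outputs can be mapped to the emotions above with a small helper; the helper name here is hypothetical:

```python
# Map the raw label ids to the emotion names listed above.
id2label = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

def label_name(raw_label: str) -> str:
    # raw_label looks like "LABEL_3"; strip the prefix and look up the name.
    return id2label[int(raw_label.split("_")[-1])]

print(label_name("LABEL_3"))  # anger
```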
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
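A sketch of these hyperparameters expressed with the Trainer API; output_dir and evaluation_strategy are placeholders not stated on this card, and the Adam betas/epsilon above are the optimizer defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-finetuned-emotion",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumed; the table below reports metrics per epoch
)
```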
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8537        | 1.0   | 250  | 0.3235          | 0.897    | 0.8958 |
| 0.2506        | 2.0   | 500  | 0.2195          | 0.9235   | 0.9233 |
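A plausible metric function behind the Accuracy and F1 columns; the card does not state the F1 averaging mode, so weighted averaging here is an assumption:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes at evaluation time.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),  # averaging mode assumed
    }
```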
Validation metrics
- test_loss: 0.2194512039422989
- test_accuracy: 0.9235
- test_f1: 0.923296474937779
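The test_ prefix on these keys is what Trainer.predict emits by default (metric_key_prefix="test"), even when run on the validation split. A sketch, assuming a trainer built from the pieces above and a tokenized emotion dataset; both names are hypothetical:

```python
# Hypothetical: trainer and emotions_encoded come from the training setup sketched above.
preds_output = trainer.predict(emotions_encoded["validation"])
print(preds_output.metrics)
# e.g. {'test_loss': 0.2195, 'test_accuracy': 0.9235, 'test_f1': 0.9233, ...}
```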
Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1