
Fine-tuned Multilingual BERT (mBERT) for the multi-label emotion classification task.

The model was trained on the lv_emotions dataset, a Latvian translation of the GoEmotions and Twitter Emotions datasets. The machine translation was produced with Google Translate.

The original 26 emotion labels were mapped to the 6 basic emotions of Ekman's theory, plus a neutral class.

Labels predicted by the classifier:

0: anger
1: disgust
2: fear
3: joy
4: sadness
5: surprise
6: neutral
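
Assuming the checkpoint loads through the Hugging Face transformers sequence-classification API (the card does not state the library explicitly), a minimal inference sketch; the 0.5 decision threshold is an assumption, not taken from this card:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "SkyWater21/mbert-lv-emotions-ekman"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Es esmu ļoti priecīgs!"  # Latvian: "I am very happy!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

labels = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]
probs = torch.sigmoid(logits).squeeze(0)  # independent probability per label (multi-label)
predicted = [label for label, p in zip(labels, probs) if p > 0.5]
print(predicted)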

The random number generators were seeded with 42:

import random

import numpy as np
import torch

def set_seed(seed=42):
    # Seed Python, NumPy, and PyTorch (CPU and all CUDA devices) for reproducibility
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
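
Presumably set_seed(42) was called once, before model initialization and training.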

Training parameters:

max_length: null
batch_size: 32
shuffle: True
num_workers: 4
pin_memory: False
drop_last: False
optimizer: adam
lr: 0.00001
weight_decay: 0
problem_type: multi_label_classification
num_epochs: 4
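
For illustration, a sketch of how these settings could map onto a PyTorch/transformers training setup; the base checkpoint name and the train_dataset argument are assumptions/placeholders, not taken from this card:

from torch.optim import Adam
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification

def build_training_setup(train_dataset):
    # train_dataset: a tokenized lv_emotions training split (placeholder, not defined here)
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-multilingual-cased",  # assumed base checkpoint ("Multilingual BERT")
        num_labels=7,
        problem_type="multi_label_classification",  # BCEWithLogitsLoss under the hood
    )
    train_loader = DataLoader(
        train_dataset,
        batch_size=32,
        shuffle=True,
        num_workers=4,
        pin_memory=False,
        drop_last=False,
    )
    optimizer = Adam(model.parameters(), lr=1e-5, weight_decay=0.0)  # "adam", lr 0.00001
    return model, train_loader, optimizer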

Evaluation results on the test split of lv_go_emotions:

| Label        | Precision | Recall | F1-Score | Support |
|--------------|-----------|--------|----------|---------|
| anger        | 0.50      | 0.35   | 0.41     | 726     |
| disgust      | 0.44      | 0.28   | 0.35     | 123     |
| fear         | 0.58      | 0.47   | 0.52     | 98      |
| joy          | 0.80      | 0.76   | 0.78     | 2104    |
| sadness      | 0.66      | 0.41   | 0.51     | 379     |
| surprise     | 0.59      | 0.55   | 0.57     | 677     |
| neutral      | 0.71      | 0.43   | 0.54     | 1787    |
| micro avg    | 0.70      | 0.55   | 0.62     | 5894    |
| macro avg    | 0.61      | 0.46   | 0.52     | 5894    |
| weighted avg | 0.69      | 0.55   | 0.61     | 5894    |
| samples avg  | 0.58      | 0.56   | 0.57     | 5894    |

Evaluation results on the test split of lv_twitter_emotions:

| Label        | Precision | Recall | F1-Score | Support |
|--------------|-----------|--------|----------|---------|
| anger        | 0.92      | 0.88   | 0.90     | 12013   |
| disgust      | 0.90      | 0.94   | 0.92     | 14117   |
| fear         | 0.82      | 0.67   | 0.74     | 3342    |
| joy          | 0.88      | 0.84   | 0.86     | 5913    |
| sadness      | 0.86      | 0.75   | 0.80     | 4786    |
| surprise     | 0.94      | 0.56   | 0.70     | 1510    |
| neutral      | 0.00      | 0.00   | 0.00     | 0       |
| micro avg    | 0.90      | 0.85   | 0.87     | 41681   |
| macro avg    | 0.76      | 0.66   | 0.70     | 41681   |
| weighted avg | 0.90      | 0.85   | 0.87     | 41681   |
| samples avg  | 0.85      | 0.85   | 0.85     | 41681   |
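
The per-label rows and the micro/macro/weighted/samples averages above follow the layout of scikit-learn's classification_report over multi-label indicator arrays. A sketch of how such a report can be produced (the use of scikit-learn and the 0.5 decision threshold are assumptions, and the arrays below are dummy values for illustration):

import numpy as np
from sklearn.metrics import classification_report

labels = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

# y_true: gold multi-hot labels, y_prob: sigmoid outputs; shapes (num_examples, 7)
y_true = np.array([[0, 0, 0, 1, 0, 0, 0],
                   [1, 0, 0, 0, 1, 0, 0]])
y_prob = np.array([[0.1, 0.0, 0.2, 0.9, 0.1, 0.3, 0.2],
                   [0.7, 0.1, 0.1, 0.2, 0.6, 0.1, 0.1]])
y_pred = (y_prob > 0.5).astype(int)  # assumed 0.5 threshold

print(classification_report(y_true, y_pred, target_names=labels, zero_division=0))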

Model size: 178M parameters (safetensors, F32)
