---
license: mit
datasets:
- dair-ai/emotion
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
## Model Description

BERT is a Transformer-based bidirectional encoder architecture pre-trained with a masked language modeling (MLM) objective.

`bert-base-uncased-emotion-fituned` was fine-tuned on the emotion dataset using the Hugging Face `Trainer` with the following training parameters:
```
num_train_epochs=8,
train_batch_size=32,
eval_batch_size=64,
warmup_steps=500,
weight_decay=0.01
```
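For reference, a minimal sketch of how such a fine-tuning run might look with the `Trainer` API. This is an illustration, not the exact training script: the card's `train_batch_size`/`eval_batch_size` are mapped to `per_device_train_batch_size`/`per_device_eval_batch_size`, and `output_dir` is a placeholder.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load the emotion dataset (6 labels: sadness, joy, love, anger, fear, surprise).
dataset = load_dataset("dair-ai/emotion")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6
)

# Training parameters from the model card; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="bert-base-uncased-emotion-fituned",
    num_train_epochs=8,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    warmup_steps=500,
    weight_decay=0.01,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```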
## Dataset

The model was fine-tuned on the [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset.
## Model Performance Comparison on the Emotion Dataset

| Model | Accuracy | Recall | F1 Score |
|---|---|---|---|
| bert-base-uncased-emotion (SOTA) | 92.6 | 87.9 | 88.2 |
| bert-base-uncased-emotion-fituned | 92.9 | 88.0 | 88.5 |
## How to Use the Model

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sonia12138/bert-base-uncased-emotion-fituned",
    top_k=None,  # return scores for all labels (replaces the deprecated return_all_scores=True)
)
prediction = classifier("I love using transformers. The best part is the wide range of support and its ease of use.")
print(prediction)
```
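With `top_k=None`, the pipeline returns a `{'label': ..., 'score': ...}` dict for every emotion class rather than only the top prediction. A small follow-up sketch for picking the highest-scoring label; the exact nesting of the output can vary across `transformers` versions, so the code handles both shapes:

```python
# For a single-string input, the result is either a flat list of
# label/score dicts or that list nested one level deep.
scores = prediction[0] if isinstance(prediction[0], list) else prediction
top = max(scores, key=lambda d: d["score"])
print(top["label"], top["score"])
```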
## Model Sources

- Repository: More Information Needed
## Eval Results

```python
{
    'eval_accuracy': 0.929,
    'eval_f1': 0.9405920712282673,
    'eval_loss': 0.15769127011299133,
    'eval_runtime': 8.0514,
    'eval_samples_per_second': 248.403,
    'eval_steps_per_second': 3.974,
}
```
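Metrics like these come from `Trainer.evaluate()` when a `compute_metrics` callback is supplied. A sketch of one that would emit the `eval_accuracy` and `eval_f1` keys above; the weighted F1 averaging is an assumption (the card does not say which averaging was used):

```python
import numpy as np
import evaluate  # pip install evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        # Weighted averaging is an assumption, not stated in the card.
        "f1": f1_metric.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```

Passing `compute_metrics=compute_metrics` to the `Trainer` above and calling `trainer.evaluate()` would produce a dict with keys like those shown.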
## Compute Infrastructure

### Hardware

NVIDIA GeForce RTX 4090

### Software

Ubuntu 22.04.1