---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- emotions
- sentiment-analysis
model-index:
- name: Distilbert-base-uncased_dair-ai_emotion
results: []
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
datasets:
- dair-ai/emotion
---
## How to use

```python
# Load the tokenizer and model directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Arjun4707/Distilbert-base-uncased_dair-ai_emotion")
model = AutoModelForSequenceClassification.from_pretrained("Arjun4707/Distilbert-base-uncased_dair-ai_emotion", from_tf=True)
```

For more details, see this notebook: https://github.com/BhammarArjun/NLP/blob/main/Model_validation_distilbert_emotions.ipynb
## Model description

The model takes text as input and predicts one of six emotions:

- label_0: anger
- label_1: fear
- label_2: joy
- label_3: love
- label_4: sadness
- label_5: surprise
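A minimal sketch of turning the model's output logits into an emotion name, assuming the index-to-label mapping above (the `predict_emotion` helper and the example logits are illustrative, not part of the released model):

```python
# Hypothetical helper: map a row of six class logits to an emotion name,
# using the label ordering documented in this model card.
id2label = {0: "anger", 1: "fear", 2: "joy", 3: "love", 4: "sadness", 5: "surprise"}

def predict_emotion(logits):
    """Return the emotion whose logit is largest (argmax over the six classes)."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return id2label[best]

# Example with made-up logits where index 2 ("joy") scores highest
print(predict_emotion([-1.2, 0.3, 4.1, 0.8, -0.5, 0.2]))  # joy
```

In practice these logits come from `model(**tokenizer(text, return_tensors="pt")).logits`.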
# Distilbert-base-uncased_dair-ai_emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion) dataset.
It achieves the following results after the final training epoch (epoch index 3 of 4 epochs):
- Train Loss: 0.0896
- Train Accuracy: 0.9582
- Validation Loss: 0.1326
- Validation Accuracy: 0.9375
## Intended uses & limitations
Use this model to classify a piece of text into one of the six emotions listed above. Note that most statements in the training data begin with "I", so the model may need further fine-tuning to generalize to other phrasings.
## Training and evaluation data
The dataset was split into 16,000 training, 2,000 validation, and 2,000 test examples.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: Adam (beta_1 = 0.9, beta_2 = 0.999, epsilon = 1e-07, amsgrad = False, jit_compile = True)
- learning rate schedule: PolynomialDecay (initial_learning_rate = 2e-05, decay_steps = 2000, end_learning_rate = 0.0, power = 1.0, cycle = False)
- training_precision: float32
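The learning-rate schedule above can be reproduced in plain Python: Keras's `PolynomialDecay` with `power=1.0` is a linear decay from the initial rate to the end rate over `decay_steps`, clamped afterwards when `cycle=False`. A minimal sketch (the function name is illustrative):

```python
def polynomial_decay(step, initial_lr=2e-5, end_lr=0.0, decay_steps=2000, power=1.0):
    """Learning rate at a given step under a Keras-style PolynomialDecay (cycle=False)."""
    step = min(step, decay_steps)  # with cycle=False the step is clamped at decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

print(polynomial_decay(0))     # 2e-05 at the start of training
print(polynomial_decay(1000))  # 1e-05 halfway through the decay
print(polynomial_decay(2000))  # 0.0 once decay_steps is reached
```

With 16,000 training examples, `decay_steps = 2000` means the rate reaches zero partway through training for typical batch sizes, after which weights stop updating.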
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5820 | 0.8014 | 0.2002 | 0.9305 | 0 |
| 0.1598 | 0.9366 | 0.1431 | 0.9355 | 1 |
| 0.1101 | 0.9515 | 0.1390 | 0.9355 | 2 |
| 0.0896     | 0.9582         | 0.1326          | 0.9375              | 3     |