---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuned-distilbert-news-article-categorization
  results: []
---
# finetuned-distilbert-news-article-categorization
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the news_article_categorization dataset. It achieves the following results on the evaluation set:
- Loss: 0.01338
- F1 score (weighted): 1.0
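
To use the checkpoint for inference, a minimal sketch with the transformers pipeline API is shown below. The Hub repo id is a placeholder (the card does not state the namespace under which the model is published), and the returned label names depend on the label mapping used during fine-tuning.

```python
from transformers import pipeline

# Placeholder repo id: replace with the actual Hub path of this checkpoint.
MODEL_ID = "your-username/finetuned-distilbert-news-article-categorization"

# Text-classification pipeline backed by the fine-tuned DistilBERT model.
classifier = pipeline("text-classification", model=MODEL_ID)

# Classify a sample headline; the label names come from the fine-tuning label map.
print(classifier("Stocks rally as the central bank signals a pause in rate hikes"))
```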
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The model was trained on a subset of the news_article_categorization dataset and validated on the remaining held-out subset.
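
The exact split is not documented; a sketch of producing such a split with the datasets library is shown below, where the data file, split ratio, and seed are assumptions.

```python
from datasets import load_dataset

# Placeholder data file: the card does not give the exact source of the
# news_article_categorization dataset.
raw = load_dataset("csv", data_files="news_article_categorization.csv")["train"]

# Hold out a validation subset; the 80/20 ratio is an assumption, the seed matches
# the one listed under training hyperparameters.
splits = raw.train_test_split(test_size=0.2, seed=17)
train_ds, eval_ds = splits["train"], splits["test"]
```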
## Training procedure
More information needed
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after the list):
- learning_rate: 1e-5
- train_batch_size: 3
- eval_batch_size: 3
- seed: 17
- optimizer: AdamW with lr=1e-5 and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0
- num_epochs: 5
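
These values map onto a transformers Trainer setup roughly as sketched below. This is an assumed reconstruction, not the original training script: num_labels, the tokenization step, and the dataset variables are placeholders (see the data sketch above).

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

# num_labels is a placeholder; it depends on the dataset's category set.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=5
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Mirror the hyperparameters listed above; AdamW is the Trainer's default optimizer.
args = TrainingArguments(
    output_dir="finetuned-distilbert-news-article-categorization",
    learning_rate=1e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=17,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=0,
    num_train_epochs=5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # tokenized training split (placeholder)
    eval_dataset=eval_ds,    # tokenized validation split (placeholder)
    tokenizer=tokenizer,
)
trainer.train()
```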
### Training results
| Training Loss | Epoch | Validation Loss | F1 score (weighted) |
|:-------------:|:-----:|:---------------:|:-------------------:|
| 0.5176        | 1.0   | 0.0466          | 0.9838              |
| 0.0513        | 2.0   | 0.0051          | 1.0000              |
| 0.0320        | 3.0   | 0.0032          | 1.0000              |
| 0.0229        | 4.0   | 0.0018          | 1.0000              |
| 0.0133        | 5.0   | 0.0017          | 1.0000              |
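
The weighted F1 reported in the table could be computed with a compute_metrics hook along these lines; this is a generic scikit-learn sketch, not necessarily the evaluation code used for this run.

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    """Weighted F1 over the validation set, as reported in the table above."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1_weighted": f1_score(labels, preds, average="weighted")}
```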