# my_distilbert_model
This model is a fine-tuned version of distilbert/distilbert-base-uncased on the IMDb dataset. It achieves the following results on the evaluation set:
- Validation Loss: 0.2362
- Training Loss: 0.1441
- Accuracy: 0.9318
- Epoch: 2
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no human labelling of any kind (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts with the BERT base model.
## Intended uses & limitations
You can use this model only for sentence classification, since it was fine-tuned on the IMDb dataset. It will not give good results for mask filling.
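As a quick sketch of the intended use, the fine-tuned checkpoint can be loaded into a text-classification `pipeline`. The repo id below is the one this card belongs to; the label names in the output depend on the model's config and are not guaranteed here.

```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="HugeFighter/my_distilbert_model",
)

# Each result is a dict with a label and a confidence score.
result = classifier("This movie was an absolute delight from start to finish.")
print(result)
```

Trying the same checkpoint in a `fill-mask` pipeline, by contrast, would not work well: the masked-language-modeling head was replaced by a classification head during fine-tuning.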
## Training and evaluation data
More information needed
## Training procedure
- A preprocessing function tokenizes the text and truncates sequences longer than DistilBERT's maximum sequence length. The Datasets `map` function applies it over the entire dataset.
- `DataCollatorWithPadding` dynamically pads sequences to the longest length in each batch during collation, which is more efficient than padding the whole dataset to the maximum length.
- To evaluate model performance during training, it helps to include a metric: load the accuracy metric from the Evaluate library.
- Define the training hyperparameters in `TrainingArguments`. To push the model to the Hub, set `push_to_hub=True`. At the end of each epoch, the `Trainer` evaluates the accuracy and saves a training checkpoint.
- Pass the training arguments to the `Trainer` along with the model, dataset, tokenizer, and data collator.
- Call `train()` to fine-tune the model.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
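Expressed as `TrainingArguments`, the listed values map to the parameters below. The `output_dir` name is an assumption; the optimizer (Adam with betas=(0.9, 0.999), epsilon=1e-08) and the linear scheduler are the `Trainer` defaults and need no explicit arguments.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_distilbert_model",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",  # the default scheduler
    # optimizer: Adam with betas=(0.9, 0.999), epsilon=1e-08 (Trainer default)
)
```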
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2247        | 1.0   | 1563 | 0.2066          | 0.9213   |
| 0.1441        | 2.0   | 3126 | 0.2362          | 0.9318   |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1