
distilbert-base-uncased-lora-text-classification

This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3504
  • Accuracy: 0.9220

Model description

More information needed

Intended uses & limitations

More information needed
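
Since usage is not yet documented, the following is a minimal inference sketch, assuming the adapter is published as Rvk4/distilbert-base-uncased-lora-text-classification (the repository this card belongs to) and that the classification head was saved with the adapter. The example sentence and the interpretation of class indices are assumptions; the card does not document a label mapping.

```python
import torch
from peft import AutoPeftModelForSequenceClassification
from transformers import AutoTokenizer

# Repository ID from this card; loads distilbert-base-uncased and
# applies the LoRA adapter on top of it.
model_id = "Rvk4/distilbert-base-uncased-lora-text-classification"
model = AutoPeftModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("This movie was surprisingly good!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Predicted class index; the label mapping is not documented in this card.
print(logits.argmax(dim=-1).item())
```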

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
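
For reference, a training setup consistent with these hyperparameters might look like the sketch below. The LoRA configuration (rank, alpha, dropout, target modules), the number of labels, and the dataset are not documented in this card and are assumptions; Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the transformers default optimizer.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments

# LoRA settings are assumptions; the card does not record r, alpha,
# dropout, or target modules.
lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=4,
    lora_alpha=32,
    lora_dropout=0.01,
    target_modules=["q_lin"],  # DistilBERT attention query projection
)

# num_labels=2 is an assumption; the training dataset is unknown.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# These arguments mirror the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-lora-text-classification",
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=10,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

# A Trainer would then be constructed with training_args and the
# (undocumented) train/eval datasets, followed by trainer.train().
```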

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4389        | 1.0   | 5426  | 0.3219          | 0.8962   |
| 0.3786        | 2.0   | 10852 | 0.3866          | 0.8938   |
| 0.4217        | 3.0   | 16278 | 0.3720          | 0.8986   |
| 0.4178        | 4.0   | 21704 | 0.4612          | 0.8937   |
| 0.3867        | 5.0   | 27130 | 0.4108          | 0.9001   |
| 0.4258        | 6.0   | 32556 | 0.4565          | 0.9034   |
| 0.4024        | 7.0   | 37982 | 0.4088          | 0.9102   |
| 0.3556        | 8.0   | 43408 | 0.3828          | 0.9130   |
| 0.3249        | 9.0   | 48834 | 0.3434          | 0.9195   |
| 0.2882        | 10.0  | 54260 | 0.3504          | 0.9220   |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.37.2
  • Pytorch 2.3.0
  • Datasets 2.20.0
  • Tokenizers 0.15.1
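
To reproduce this environment, the versions above can be pinned, e.g.:

```
pip install peft==0.11.1 transformers==4.37.2 torch==2.3.0 datasets==2.20.0 tokenizers==0.15.1
```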
