---
base_model: microsoft/mdeberta-v3-base
library_name: transformers
license: mit
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: scenario-NON-KD-SCR-COPY-CDF-EN-D2_data-en-cardiff_eng_only44
  results: []
---

# scenario-NON-KD-SCR-COPY-CDF-EN-D2_data-en-cardiff_eng_only44

This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0909
- Accuracy: 0.3682
- F1: 0.3646

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 44
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.7241  | 100  | 1.1864          | 0.3651   | 0.3644 |
| No log        | 3.4483  | 200  | 2.2167          | 0.3580   | 0.3355 |
| No log        | 5.1724  | 300  | 2.7873          | 0.3673   | 0.3618 |
| No log        | 6.8966  | 400  | 3.5495          | 0.3739   | 0.3714 |
| 0.4197        | 8.6207  | 500  | 4.2289          | 0.3770   | 0.3708 |
| 0.4197        | 10.3448 | 600  | 4.6578          | 0.3638   | 0.3605 |
| 0.4197        | 12.0690 | 700  | 4.5844          | 0.3690   | 0.3671 |
| 0.4197        | 13.7931 | 800  | 4.8103          | 0.3616   | 0.3462 |
| 0.4197        | 15.5172 | 900  | 4.8621          | 0.3616   | 0.3545 |
| 0.017         | 17.2414 | 1000 | 4.9407          | 0.3708   | 0.3657 |
| 0.017         | 18.9655 | 1100 | 5.0334          | 0.3699   | 0.3696 |
| 0.017         | 20.6897 | 1200 | 4.9701          | 0.3686   | 0.3676 |
| 0.017         | 22.4138 | 1300 | 4.9793          | 0.3686   | 0.3654 |
| 0.017         | 24.1379 | 1400 | 5.0299          | 0.3668   | 0.3600 |
| 0.0076        | 25.8621 | 1500 | 5.1558          | 0.3616   | 0.3544 |
| 0.0076        | 27.5862 | 1600 | 5.0915          | 0.3668   | 0.3622 |
| 0.0076        | 29.3103 | 1700 | 5.0909          | 0.3682   | 0.3646 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1