---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_cross_organization_task1_fold0
  results: []
---

# arabert_cross_organization_task1_fold0

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6973
- Qwk: 0.4121
- Mse: 0.6973

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
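For reference, a minimal sketch of how these hyperparameters map onto 🤗 Transformers `TrainingArguments`; the `output_dir` is illustrative, and the Adam beta/epsilon values listed above are the library defaults:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is a hypothetical path.
training_args = TrainingArguments(
    output_dir="arabert_cross_organization_task1_fold0",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    adam_beta1=0.9,    # default Adam betas/epsilon, as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

These arguments would then be passed to a `Trainer` along with the model and the (unspecified) train/eval datasets.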
### Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log | 0.1333 | 2 | 2.5684 | 0.0116 | 2.5684 |
| No log | 0.2667 | 4 | 1.7358 | 0.0856 | 1.7358 |
| No log | 0.4 | 6 | 0.7580 | 0.1307 | 0.7580 |
| No log | 0.5333 | 8 | 1.0787 | 0.0341 | 1.0787 |
| No log | 0.6667 | 10 | 0.9103 | 0.1678 | 0.9103 |
| No log | 0.8 | 12 | 1.1212 | 0.2241 | 1.1212 |
| No log | 0.9333 | 14 | 1.2424 | 0.2061 | 1.2424 |
| No log | 1.0667 | 16 | 0.9462 | 0.2685 | 0.9462 |
| No log | 1.2 | 18 | 0.6233 | 0.4273 | 0.6233 |
| No log | 1.3333 | 20 | 0.5885 | 0.4743 | 0.5885 |
| No log | 1.4667 | 22 | 0.6616 | 0.3952 | 0.6616 |
| No log | 1.6 | 24 | 0.8153 | 0.3126 | 0.8153 |
| No log | 1.7333 | 26 | 0.8278 | 0.3127 | 0.8278 |
| No log | 1.8667 | 28 | 0.6680 | 0.3884 | 0.6680 |
| No log | 2.0 | 30 | 0.5761 | 0.4959 | 0.5761 |
| No log | 2.1333 | 32 | 0.5793 | 0.4845 | 0.5793 |
| No log | 2.2667 | 34 | 0.6288 | 0.4575 | 0.6288 |
| No log | 2.4 | 36 | 0.8037 | 0.3583 | 0.8037 |
| No log | 2.5333 | 38 | 0.9122 | 0.3147 | 0.9122 |
| No log | 2.6667 | 40 | 0.8141 | 0.3536 | 0.8141 |
| No log | 2.8 | 42 | 0.6941 | 0.3971 | 0.6941 |
| No log | 2.9333 | 44 | 0.5685 | 0.4691 | 0.5685 |
| No log | 3.0667 | 46 | 0.5381 | 0.5160 | 0.5381 |
| No log | 3.2 | 48 | 0.5770 | 0.4617 | 0.5770 |
| No log | 3.3333 | 50 | 0.6630 | 0.4108 | 0.6630 |
| No log | 3.4667 | 52 | 0.6155 | 0.4490 | 0.6155 |
| No log | 3.6 | 54 | 0.5977 | 0.4930 | 0.5977 |
| No log | 3.7333 | 56 | 0.6466 | 0.4721 | 0.6466 |
| No log | 3.8667 | 58 | 0.6835 | 0.4250 | 0.6835 |
| No log | 4.0 | 60 | 0.7296 | 0.3879 | 0.7296 |
| No log | 4.1333 | 62 | 0.6819 | 0.4354 | 0.6819 |
| No log | 4.2667 | 64 | 0.5945 | 0.4884 | 0.5945 |
| No log | 4.4 | 66 | 0.5786 | 0.5050 | 0.5786 |
| No log | 4.5333 | 68 | 0.6160 | 0.4670 | 0.6160 |
| No log | 4.6667 | 70 | 0.7401 | 0.3934 | 0.7401 |
| No log | 4.8 | 72 | 0.8433 | 0.3505 | 0.8433 |
| No log | 4.9333 | 74 | 0.7844 | 0.3718 | 0.7844 |
| No log | 5.0667 | 76 | 0.6636 | 0.4417 | 0.6636 |
| No log | 5.2 | 78 | 0.5945 | 0.4591 | 0.5945 |
| No log | 5.3333 | 80 | 0.5972 | 0.4543 | 0.5972 |
| No log | 5.4667 | 82 | 0.6419 | 0.4316 | 0.6419 |
| No log | 5.6 | 84 | 0.6847 | 0.4354 | 0.6847 |
| No log | 5.7333 | 86 | 0.6772 | 0.4472 | 0.6772 |
| No log | 5.8667 | 88 | 0.6758 | 0.4430 | 0.6758 |
| No log | 6.0 | 90 | 0.7271 | 0.4105 | 0.7271 |
| No log | 6.1333 | 92 | 0.8108 | 0.3650 | 0.8108 |
| No log | 6.2667 | 94 | 0.7936 | 0.3602 | 0.7936 |
| No log | 6.4 | 96 | 0.6779 | 0.4077 | 0.6779 |
| No log | 6.5333 | 98 | 0.5967 | 0.4507 | 0.5967 |
| No log | 6.6667 | 100 | 0.5802 | 0.4695 | 0.5802 |
| No log | 6.8 | 102 | 0.6079 | 0.4518 | 0.6079 |
| No log | 6.9333 | 104 | 0.6858 | 0.4235 | 0.6858 |
| No log | 7.0667 | 106 | 0.7702 | 0.3774 | 0.7702 |
| No log | 7.2 | 108 | 0.8119 | 0.3543 | 0.8119 |
| No log | 7.3333 | 110 | 0.7674 | 0.3774 | 0.7674 |
| No log | 7.4667 | 112 | 0.6887 | 0.4250 | 0.6887 |
| No log | 7.6 | 114 | 0.6566 | 0.4371 | 0.6566 |
| No log | 7.7333 | 116 | 0.6477 | 0.4363 | 0.6477 |
| No log | 7.8667 | 118 | 0.6556 | 0.4276 | 0.6556 |
| No log | 8.0 | 120 | 0.6794 | 0.4219 | 0.6794 |
| No log | 8.1333 | 122 | 0.7142 | 0.4001 | 0.7142 |
| No log | 8.2667 | 124 | 0.7342 | 0.3849 | 0.7342 |
| No log | 8.4 | 126 | 0.7384 | 0.3907 | 0.7384 |
| No log | 8.5333 | 128 | 0.7240 | 0.3993 | 0.7240 |
| No log | 8.6667 | 130 | 0.7146 | 0.4095 | 0.7146 |
| No log | 8.8 | 132 | 0.7179 | 0.4131 | 0.7179 |
| No log | 8.9333 | 134 | 0.7112 | 0.4145 | 0.7112 |
| No log | 9.0667 | 136 | 0.7020 | 0.4168 | 0.7020 |
| No log | 9.2 | 138 | 0.7089 | 0.4124 | 0.7089 |
| No log | 9.3333 | 140 | 0.7179 | 0.4051 | 0.7179 |
| No log | 9.4667 | 142 | 0.7143 | 0.4015 | 0.7143 |
| No log | 9.6 | 144 | 0.7043 | 0.4104 | 0.7043 |
| No log | 9.7333 | 146 | 0.6982 | 0.4121 | 0.6982 |
| No log | 9.8667 | 148 | 0.6980 | 0.4121 | 0.6980 |
| No log | 10.0 | 150 | 0.6973 | 0.4121 | 0.6973 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
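The Qwk column in the results table is quadratic weighted kappa, which together with MSE suggests an ordinal scoring task. A minimal sketch of how these two metrics can be computed with scikit-learn; rounding continuous model outputs to integer scores is an assumption, since the card does not state the score scale:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def qwk_and_mse(labels, preds):
    """Quadratic weighted kappa and MSE, the two metrics reported above."""
    # Assumption: continuous predictions are rounded to the nearest
    # integer score before computing kappa.
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(preds).astype(int),
        weights="quadratic",
    )
    return qwk, mean_squared_error(labels, preds)

# Toy illustration with made-up scores.
labels = np.array([1.0, 2.0, 3.0, 2.0])
preds = np.array([1.2, 2.1, 2.6, 2.4])
print(qwk_and_mse(labels, preds))
```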
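A hedged inference sketch, assuming the checkpoint uses a single-logit regression head (implied by the MSE metric, but not confirmed by the card); the `checkpoint` path is a hypothetical local directory or Hub id:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "arabert_cross_organization_task1_fold0"  # hypothetical path/id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

text = "نص عربي للتقييم"  # an Arabic input to score
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits
# With a single-logit regression head (assumed), the score is the raw logit.
print(logits.squeeze().item())
```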