---
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: arabert_cross_relevance_task2_fold5
  results: []
---

# arabert_cross_relevance_task2_fold5

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2256
- Qwk: 0.3506
- Mse: 0.2255

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
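As an illustration (not part of the original card), the hyperparameters above map onto a `transformers.TrainingArguments` configuration roughly as follows; the output directory is a hypothetical placeholder, and the Adam settings shown match both the card and the library defaults:

```python
# Hedged sketch: the reported hyperparameters expressed as a
# transformers.TrainingArguments config. The output_dir is hypothetical;
# the card does not document the training script or dataset.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_relevance_task2_fold5",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    # Adam betas and epsilon as reported on the card (these are also the
    # defaults for the library's AdamW optimizer).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```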
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.1333 | 2    | 0.3683          | 0.2902 | 0.3681 |
| No log        | 0.2667 | 4    | 0.2831          | 0.3575 | 0.2825 |
| No log        | 0.4    | 6    | 0.2856          | 0.1974 | 0.2849 |
| No log        | 0.5333 | 8    | 0.2976          | 0.3064 | 0.2970 |
| No log        | 0.6667 | 10   | 0.2707          | 0.3279 | 0.2703 |
| No log        | 0.8    | 12   | 0.2262          | 0.2826 | 0.2256 |
| No log        | 0.9333 | 14   | 0.2683          | 0.2601 | 0.2679 |
| No log        | 1.0667 | 16   | 0.2578          | 0.2758 | 0.2575 |
| No log        | 1.2    | 18   | 0.2380          | 0.3020 | 0.2377 |
| No log        | 1.3333 | 20   | 0.2190          | 0.3457 | 0.2188 |
| No log        | 1.4667 | 22   | 0.1962          | 0.3457 | 0.1960 |
| No log        | 1.6    | 24   | 0.1880          | 0.3562 | 0.1878 |
| No log        | 1.7333 | 26   | 0.1845          | 0.3547 | 0.1844 |
| No log        | 1.8667 | 28   | 0.1940          | 0.3485 | 0.1940 |
| No log        | 2.0    | 30   | 0.1851          | 0.3515 | 0.1850 |
| No log        | 2.1333 | 32   | 0.1774          | 0.3551 | 0.1772 |
| No log        | 2.2667 | 34   | 0.1847          | 0.3649 | 0.1845 |
| No log        | 2.4    | 36   | 0.1928          | 0.3552 | 0.1927 |
| No log        | 2.5333 | 38   | 0.2205          | 0.3465 | 0.2205 |
| No log        | 2.6667 | 40   | 0.2304          | 0.3465 | 0.2304 |
| No log        | 2.8    | 42   | 0.2178          | 0.3465 | 0.2178 |
| No log        | 2.9333 | 44   | 0.1968          | 0.3522 | 0.1966 |
| No log        | 3.0667 | 46   | 0.1970          | 0.3289 | 0.1968 |
| No log        | 3.2    | 48   | 0.1984          | 0.3289 | 0.1982 |
| No log        | 3.3333 | 50   | 0.2024          | 0.3501 | 0.2023 |
| No log        | 3.4667 | 52   | 0.2031          | 0.3422 | 0.2030 |
| No log        | 3.6    | 54   | 0.2013          | 0.3476 | 0.2012 |
| No log        | 3.7333 | 56   | 0.1980          | 0.3476 | 0.1979 |
| No log        | 3.8667 | 58   | 0.1927          | 0.3486 | 0.1926 |
| No log        | 4.0    | 60   | 0.1951          | 0.3486 | 0.1950 |
| No log        | 4.1333 | 62   | 0.1931          | 0.3476 | 0.1930 |
| No log        | 4.2667 | 64   | 0.2079          | 0.3444 | 0.2079 |
| No log        | 4.4    | 66   | 0.2177          | 0.3494 | 0.2177 |
| No log        | 4.5333 | 68   | 0.2017          | 0.3566 | 0.2017 |
| No log        | 4.6667 | 70   | 0.1927          | 0.3810 | 0.1925 |
| No log        | 4.8    | 72   | 0.1963          | 0.3658 | 0.1962 |
| No log        | 4.9333 | 74   | 0.2050          | 0.3496 | 0.2049 |
| No log        | 5.0667 | 76   | 0.2221          | 0.3444 | 0.2220 |
| No log        | 5.2    | 78   | 0.2272          | 0.3444 | 0.2270 |
| No log        | 5.3333 | 80   | 0.2127          | 0.3434 | 0.2125 |
| No log        | 5.4667 | 82   | 0.2034          | 0.3444 | 0.2032 |
| No log        | 5.6    | 84   | 0.2081          | 0.3522 | 0.2079 |
| No log        | 5.7333 | 86   | 0.2204          | 0.3650 | 0.2203 |
| No log        | 5.8667 | 88   | 0.2256          | 0.3650 | 0.2255 |
| No log        | 6.0    | 90   | 0.2294          | 0.3627 | 0.2292 |
| No log        | 6.1333 | 92   | 0.2113          | 0.3465 | 0.2111 |
| No log        | 6.2667 | 94   | 0.2021          | 0.3434 | 0.2019 |
| No log        | 6.4    | 96   | 0.2047          | 0.3434 | 0.2045 |
| No log        | 6.5333 | 98   | 0.2068          | 0.3423 | 0.2066 |
| No log        | 6.6667 | 100  | 0.2100          | 0.3434 | 0.2099 |
| No log        | 6.8    | 102  | 0.2176          | 0.3475 | 0.2175 |
| No log        | 6.9333 | 104  | 0.2219          | 0.3585 | 0.2218 |
| No log        | 7.0667 | 106  | 0.2202          | 0.3585 | 0.2201 |
| No log        | 7.2    | 108  | 0.2172          | 0.3585 | 0.2171 |
| No log        | 7.3333 | 110  | 0.2141          | 0.3576 | 0.2140 |
| No log        | 7.4667 | 112  | 0.2088          | 0.3525 | 0.2087 |
| No log        | 7.6    | 114  | 0.2032          | 0.3496 | 0.2031 |
| No log        | 7.7333 | 116  | 0.2002          | 0.3486 | 0.2000 |
| No log        | 7.8667 | 118  | 0.2006          | 0.3443 | 0.2004 |
| No log        | 8.0    | 120  | 0.2039          | 0.3486 | 0.2037 |
| No log        | 8.1333 | 122  | 0.2135          | 0.3496 | 0.2134 |
| No log        | 8.2667 | 124  | 0.2247          | 0.3576 | 0.2246 |
| No log        | 8.4    | 126  | 0.2292          | 0.3576 | 0.2291 |
| No log        | 8.5333 | 128  | 0.2321          | 0.3576 | 0.2319 |
| No log        | 8.6667 | 130  | 0.2280          | 0.3465 | 0.2279 |
| No log        | 8.8    | 132  | 0.2199          | 0.3506 | 0.2197 |
| No log        | 8.9333 | 134  | 0.2153          | 0.3496 | 0.2151 |
| No log        | 9.0667 | 136  | 0.2138          | 0.3486 | 0.2136 |
| No log        | 9.2    | 138  | 0.2163          | 0.3486 | 0.2161 |
| No log        | 9.3333 | 140  | 0.2180          | 0.3486 | 0.2178 |
| No log        | 9.4667 | 142  | 0.2205          | 0.3486 | 0.2204 |
| No log        | 9.6    | 144  | 0.2225          | 0.3496 | 0.2223 |
| No log        | 9.7333 | 146  | 0.2241          | 0.3506 | 0.2240 |
| No log        | 9.8667 | 148  | 0.2254          | 0.3506 | 0.2253 |
| No log        | 10.0   | 150  | 0.2256          | 0.3506 | 0.2255 |

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
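## How to use

The card does not document usage, so the following is only a minimal inference sketch. It assumes the checkpoint loads as a sequence-classification model with a single regression output (the reported MSE/QWK metrics suggest scored relevance prediction) and that the repo id below matches where the model is hosted; both are assumptions.

```python
# Hedged usage sketch (not from the original card). The repo id and the
# single-logit regression head are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "arabert_cross_relevance_task2_fold5"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Score a sample Arabic input for relevance.
inputs = tokenizer("نص تجريبي للتقييم", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted relevance score: {score:.4f}")
```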