arabert_cross_relevance_task5_fold5
This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.1922
- Qwk (quadratic weighted kappa): 0.3580
- Mse (mean squared error): 0.1919
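Since this is a standard Transformers checkpoint, it can be loaded with the auto classes. Below is a minimal, unverified inference sketch; reading the output as a single relevance score is an assumption based on the Mse metric, not something the card states.

```python
# Minimal sketch: load the fine-tuned checkpoint and score one input.
# Assumption: the head produces a single relevance-style score (see the
# Mse metric above); the card itself does not describe the task format.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_relevance_task5_fold5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "..."  # hypothetical Arabic input text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```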
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
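For reference, here is a minimal sketch of how these hyperparameters map onto Hugging Face TrainingArguments, assuming the standard Trainer was used; the card does not include the actual training script, and the dataset objects are left as hypothetical placeholders.

```python
# Sketch of the training configuration implied by the list above.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02",
    num_labels=1,  # assumption: single-output relevance score
)

args = TrainingArguments(
    output_dir="arabert_cross_relevance_task5_fold5",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# The card does not describe the training data, so the datasets below
# are hypothetical placeholders.
# trainer = Trainer(
#     model=model,
#     args=args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
#     compute_metrics=compute_metrics,  # see the metrics sketch below
# )
# trainer.train()
```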
Training results
Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
---|---|---|---|---|---|
No log | 0.1333 | 2 | 0.4644 | 0.1830 | 0.4641 |
No log | 0.2667 | 4 | 0.3061 | 0.4777 | 0.3055 |
No log | 0.4 | 6 | 0.3885 | 0.4275 | 0.3875 |
No log | 0.5333 | 8 | 0.2978 | 0.1841 | 0.2972 |
No log | 0.6667 | 10 | 0.3244 | 0.2335 | 0.3239 |
No log | 0.8 | 12 | 0.2777 | 0.2095 | 0.2771 |
No log | 0.9333 | 14 | 0.2770 | 0.1911 | 0.2763 |
No log | 1.0667 | 16 | 0.3068 | 0.3049 | 0.3060 |
No log | 1.2 | 18 | 0.2650 | 0.2682 | 0.2644 |
No log | 1.3333 | 20 | 0.2449 | 0.2365 | 0.2444 |
No log | 1.4667 | 22 | 0.2306 | 0.2508 | 0.2302 |
No log | 1.6 | 24 | 0.2334 | 0.2651 | 0.2331 |
No log | 1.7333 | 26 | 0.2396 | 0.2770 | 0.2393 |
No log | 1.8667 | 28 | 0.2278 | 0.2629 | 0.2275 |
No log | 2.0 | 30 | 0.2272 | 0.2894 | 0.2268 |
No log | 2.1333 | 32 | 0.2158 | 0.2717 | 0.2154 |
No log | 2.2667 | 34 | 0.1997 | 0.3126 | 0.1994 |
No log | 2.4 | 36 | 0.1975 | 0.3434 | 0.1973 |
No log | 2.5333 | 38 | 0.1903 | 0.3467 | 0.1901 |
No log | 2.6667 | 40 | 0.1879 | 0.3399 | 0.1876 |
No log | 2.8 | 42 | 0.2111 | 0.3205 | 0.2107 |
No log | 2.9333 | 44 | 0.2195 | 0.3003 | 0.2190 |
No log | 3.0667 | 46 | 0.2169 | 0.3061 | 0.2165 |
No log | 3.2 | 48 | 0.2026 | 0.3222 | 0.2023 |
No log | 3.3333 | 50 | 0.1934 | 0.3422 | 0.1932 |
No log | 3.4667 | 52 | 0.1933 | 0.3499 | 0.1932 |
No log | 3.6 | 54 | 0.1846 | 0.3467 | 0.1844 |
No log | 3.7333 | 56 | 0.1877 | 0.3716 | 0.1875 |
No log | 3.8667 | 58 | 0.1997 | 0.3725 | 0.1994 |
No log | 4.0 | 60 | 0.2019 | 0.3697 | 0.2015 |
No log | 4.1333 | 62 | 0.1966 | 0.3432 | 0.1964 |
No log | 4.2667 | 64 | 0.1874 | 0.3434 | 0.1873 |
No log | 4.4 | 66 | 0.1840 | 0.3478 | 0.1839 |
No log | 4.5333 | 68 | 0.1812 | 0.3489 | 0.1811 |
No log | 4.6667 | 70 | 0.1768 | 0.3640 | 0.1766 |
No log | 4.8 | 72 | 0.1794 | 0.3697 | 0.1791 |
No log | 4.9333 | 74 | 0.1855 | 0.3408 | 0.1852 |
No log | 5.0667 | 76 | 0.1947 | 0.3577 | 0.1944 |
No log | 5.2 | 78 | 0.1924 | 0.3596 | 0.1922 |
No log | 5.3333 | 80 | 0.1912 | 0.3500 | 0.1909 |
No log | 5.4667 | 82 | 0.1911 | 0.3544 | 0.1908 |
No log | 5.6 | 84 | 0.1909 | 0.3475 | 0.1907 |
No log | 5.7333 | 86 | 0.1914 | 0.3531 | 0.1912 |
No log | 5.8667 | 88 | 0.1937 | 0.3477 | 0.1935 |
No log | 6.0 | 90 | 0.1905 | 0.3498 | 0.1903 |
No log | 6.1333 | 92 | 0.1863 | 0.3478 | 0.1861 |
No log | 6.2667 | 94 | 0.1855 | 0.3587 | 0.1852 |
No log | 6.4 | 96 | 0.1840 | 0.3632 | 0.1837 |
No log | 6.5333 | 98 | 0.1808 | 0.3812 | 0.1805 |
No log | 6.6667 | 100 | 0.1789 | 0.3737 | 0.1786 |
No log | 6.8 | 102 | 0.1813 | 0.3848 | 0.1810 |
No log | 6.9333 | 104 | 0.1837 | 0.3797 | 0.1834 |
No log | 7.0667 | 106 | 0.1848 | 0.3579 | 0.1845 |
No log | 7.2 | 108 | 0.1883 | 0.3484 | 0.1880 |
No log | 7.3333 | 110 | 0.1935 | 0.3642 | 0.1933 |
No log | 7.4667 | 112 | 0.1935 | 0.3624 | 0.1932 |
No log | 7.6 | 114 | 0.1895 | 0.3453 | 0.1892 |
No log | 7.7333 | 116 | 0.1882 | 0.3475 | 0.1878 |
No log | 7.8667 | 118 | 0.1885 | 0.3565 | 0.1881 |
No log | 8.0 | 120 | 0.1890 | 0.3621 | 0.1886 |
No log | 8.1333 | 122 | 0.1888 | 0.3677 | 0.1884 |
No log | 8.2667 | 124 | 0.1882 | 0.3724 | 0.1878 |
No log | 8.4 | 126 | 0.1888 | 0.3733 | 0.1884 |
No log | 8.5333 | 128 | 0.1883 | 0.3686 | 0.1879 |
No log | 8.6667 | 130 | 0.1881 | 0.3658 | 0.1877 |
No log | 8.8 | 132 | 0.1886 | 0.3517 | 0.1883 |
No log | 8.9333 | 134 | 0.1903 | 0.3624 | 0.1900 |
No log | 9.0667 | 136 | 0.1928 | 0.3633 | 0.1925 |
No log | 9.2 | 138 | 0.1945 | 0.3650 | 0.1942 |
No log | 9.3333 | 140 | 0.1953 | 0.3650 | 0.1950 |
No log | 9.4667 | 142 | 0.1951 | 0.3650 | 0.1949 |
No log | 9.6 | 144 | 0.1943 | 0.3641 | 0.1940 |
No log | 9.7333 | 146 | 0.1931 | 0.3589 | 0.1928 |
No log | 9.8667 | 148 | 0.1925 | 0.3580 | 0.1922 |
No log | 10.0 | 150 | 0.1922 | 0.3580 | 0.1919 |
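The card does not define how the Qwk and Mse columns were computed. A plausible reading, sketched below, is Cohen's kappa with quadratic weights plus mean squared error; rounding continuous predictions to integer labels before computing kappa is an assumption.

```python
# Hedged sketch of a compute_metrics function consistent with the
# Qwk/Mse columns above. The rounding step is an assumption.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = predictions.squeeze(-1)  # single-output head
    mse = mean_squared_error(labels, predictions)
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(predictions).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse}
```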
Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1