readme: minor tweak
README.md
@@ -24,7 +24,7 @@ We use XLM-RoBERTa Large as backbone language model and the following hyper-parameters
 | Learning Rate | `5e-06` |
 | Max. Epochs   | `10`    |

-Additionally, the [FLERT](https://arxiv.org/abs/2011.06993) is used for fine-tuning the model.
+Additionally, the [FLERT](https://arxiv.org/abs/2011.06993) approach is used for fine-tuning the model.

 ## Results