Update link
README.md CHANGED
@@ -18,6 +18,8 @@ tags:
 
 ## Evaluation
 
+Since no evaluation corpus composed of French administrative documents was available at the time, we decided to create our own for the NER (Named Entity Recognition) task.
+
 ### Model Performance
 
 | Model | P (%) | R (%) | F1 (%) |
@@ -29,4 +31,4 @@ tags:
 | AdminBERT-NER 4G | 78.47 | 80.35 | 79.26 |
 | AdminBERT-NER 16GB | 78.79 | 82.07 | 80.11 |
 
-To evaluate each model, we performed five runs and averaged the results on the test set of Adminset-NER.
+To evaluate each model, we performed five runs and averaged the results on the test set of [Adminset-NER](https://huggingface.co/datasets/taln-ls2n/Adminset-NER).
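The evaluation protocol described in the README (five runs, metrics averaged on the Adminset-NER test set) can be sketched as follows. The per-run numbers below are illustrative placeholders, not the actual run results behind the table:

```python
# Sketch: average precision/recall/F1 over several evaluation runs,
# as the README describes for AdminBERT-NER (five runs on the test set).

def average_runs(runs):
    """Average a list of (precision, recall, f1) tuples, one per run.

    Returns a (precision, recall, f1) tuple rounded to two decimals,
    matching the table's formatting.
    """
    n = len(runs)
    return tuple(round(sum(run[i] for run in runs) / n, 2) for i in range(3))

# Hypothetical per-run results (percent), five runs as in the README.
runs = [
    (78.5, 82.0, 80.2),
    (79.1, 82.3, 80.7),
    (78.6, 81.9, 80.2),
    (78.9, 82.1, 80.5),
    (78.8, 82.0, 80.4),
]
print(average_runs(runs))
```

Note that the F1 column is itself averaged per run, so it need not equal the harmonic mean of the averaged P and R columns.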