julian-schelb committed
Commit: a380e1d • Parent(s): 818382f
Update README.md

README.md CHANGED
@@ -53,7 +53,7 @@ metrics:

# RoBERTa for Multilingual Named Entity Recognition

-## Model
+## Model Description

This model detects entities by classifying every token according to the IOB format:
@@ -61,9 +61,9 @@ This model detects entities by classifying every token according to the IOB form
['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
```

-You can find the code in [this](https://github.com/julianschelb/roberta-ner-multilingual)
+You can find the code in [this](https://github.com/julianschelb/roberta-ner-multilingual) GitHub repository.

-## Training
+## Training Data

This model was fine-tuned on a portion of the [wikiann](https://huggingface.co/datasets/wikiann) dataset corresponding to the following languages:
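The hunk above shows the model's IOB label set. As a minimal inference sketch, the Transformers token-classification pipeline can be pointed at the fine-tuned checkpoint; the model id below is an assumption inferred from the linked GitHub repository, not confirmed on this page:

```python
# Hedged sketch: NER inference with the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="julianschelb/roberta-ner-multilingual",  # assumed checkpoint id
    aggregation_strategy="simple",  # merge B-/I- word pieces into entity spans
)

print(ner("Angela Merkel met representatives of Siemens in Munich."))
# Each result carries an entity_group (PER, ORG, LOC), a score, and the span text.
```

With `aggregation_strategy="simple"`, the pipeline merges B-/I- word pieces into whole entity spans instead of returning raw per-token tags.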
@@ -79,7 +79,7 @@ This model was fine-tuned on a portion of the [wikiann](https://huggingface.co/d

The model was fine-tuned on 375,100 sentences in the training set, with a validation set of 173,100 examples. Performance metrics reported are based on an additional 173,100 examples. The complete WikiANN dataset includes training examples for 282 languages and was constructed from Wikipedia. Training examples are extracted in an automated manner, exploiting entities mentioned in Wikipedia articles, which are often formatted as hyperlinks to the source article. The NER tags provided are in the IOB2 format. Named entities are classified as location (LOC), person (PER), or organization (ORG).

-## Evaluation
+## Evaluation Results

This model achieves the following results (measured using the test split of the [wikiann](https://huggingface.co/datasets/wikiann) dataset):
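For context on the training-data paragraph above, here is a sketch of inspecting the WikiANN splits with the `datasets` library; the `"de"` subset is an illustrative assumption, since the exact language list is elided from the hunks shown here:

```python
from datasets import load_dataset

# Illustrative subset choice; the concrete training languages are elided above.
wikiann_de = load_dataset("wikiann", "de")
print(wikiann_de)           # train / validation / test splits
example = wikiann_de["train"][0]
print(example["tokens"])    # whitespace-tokenized words
print(example["ner_tags"])  # integer-encoded IOB2 labels
print(wikiann_de["train"].features["ner_tags"].feature.names)
# ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
```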
@@ -143,7 +143,7 @@ More precisely, it was pretrained with the Masked language modeling (MLM) object

This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.

-#### Limitations and
+#### Limitations and Bias

This model is limited by its training dataset of entity-annotated Wikipedia articles from a specific span of time. This may not generalize well for all use cases in different domains.
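The paragraph above describes reusing the pretrained encoder's representations as features for a downstream classifier. A hedged sketch of that feature-extraction step, assuming the public xlm-roberta-base checkpoint stands in for the encoder:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: xlm-roberta-base as the pretrained multilingual encoder.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")

sentences = ["Angela Merkel besuchte München."]  # placeholder labeled data
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

# Mean-pool token embeddings into one fixed-size vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1)
features = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(features.shape)  # (1, 768)
```

These pooled vectors can then feed any standard classifier, e.g. logistic regression, as the paragraph suggests.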