stefan-it committed (verified) · Commit ce3098a · 1 Parent(s): ba8cf81

readme: add initial version

Files changed (1): README.md ADDED (+62 -0)

---
language:
- en
library_name: flair
pipeline_tag: token-classification
base_model: FacebookAI/xlm-roberta-large
---

# Flair NER Model trained on CleanCoNLL Dataset

This (unofficial) Flair NER model was trained on the awesome [CleanCoNLL](https://aclanthology.org/2023.emnlp-main.533/) dataset.

The CleanCoNLL dataset was proposed by Susanna Rücker and Alan Akbik and introduces a corrected version of the classic CoNLL-03 dataset, with updated and more consistent NER labels.

## Fine-Tuning

We use XLM-RoBERTa Large as the backbone language model and the following hyper-parameters for fine-tuning:

| Hyper-Parameter | Value   |
|:----------------|:--------|
| Batch Size      | `4`     |
| Learning Rate   | `5e-06` |
| Max. Epochs     | `10`    |

Additionally, the [FLERT](https://arxiv.org/abs/2011.06993) approach is used for fine-tuning the model.

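The original training script is not part of this model card, so the following is only a minimal sketch of how such a FLERT-style fine-tuning run could look in Flair. The local data folder, file names and column format are assumptions (CleanCoNLL has to be rebuilt locally with the scripts provided by its authors); the hyper-parameters are taken from the table above.

```python
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Assumption: CleanCoNLL was built locally and stored as CoNLL-style
# column files under ./clean_conll (paths and column layout are placeholders).
columns = {0: "text", 1: "ner"}
corpus = ColumnCorpus(
    "./clean_conll",
    columns,
    train_file="cleanconll.train",
    dev_file="cleanconll.dev",
    test_file="cleanconll.test",
)

label_dict = corpus.make_label_dictionary(label_type="ner")

# FLERT-style transformer embeddings: fine-tuned, with document-level context
embeddings = TransformerWordEmbeddings(
    model="xlm-roberta-large",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
    use_context=True,
)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)

# hyper-parameters from the table above
trainer.fine_tune(
    "resources/taggers/flair-clean-conll",
    learning_rate=5e-6,
    mini_batch_size=4,
    max_epochs=10,
)
```

Setting `use_context=True` on `TransformerWordEmbeddings` is what enables the FLERT document-level context mentioned above.
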
## Results

We report micro F1-score on the development set (in brackets) and the test set for five runs with different seeds:

| Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Avg. |
|:--------------- |:--------------- |:--------------- |:--------------- |:--------------- |:--------------- |
| (97.34) / 97.00 | (97.26) / 96.90 | (97.66) / 97.02 | (97.42) / 96.96 | (97.46) / 96.99 | (97.43) / 96.97 |

Rücker and Akbik report 96.98 over three different runs, so our results are very close to their reported performance!

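With a local copy of CleanCoNLL, a score like the ones above can be reproduced (approximately) using Flair's built-in evaluation. The corpus layout below is the same assumption as in the fine-tuning sketch; only the model ID is taken from this repository.

```python
from flair.datasets import ColumnCorpus
from flair.models import SequenceTagger

# Assumption: same local CleanCoNLL layout as in the fine-tuning sketch above.
columns = {0: "text", 1: "ner"}
corpus = ColumnCorpus(
    "./clean_conll",
    columns,
    train_file="cleanconll.train",
    dev_file="cleanconll.dev",
    test_file="cleanconll.test",
)

# load one of the released models from the Hub
tagger = SequenceTagger.load("stefan-it/flair-clean-conll-5")

# evaluate micro F1 on the test split
result = tagger.evaluate(
    corpus.test,
    gold_label_type="ner",
    mini_batch_size=32,
)
print(result.detailed_results)
print(f"Micro F1: {result.main_score:.4f}")
```
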
# Flair Demo

The following snippet shows how to use the CleanCoNLL NER models with Flair:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("stefan-it/flair-clean-conll-5")

# make example sentence
sentence = Sentence("According to the BBC George Washington went to Washington.")

# predict NER tags
tagger.predict(sentence)

# print sentence with predicted tags
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')

# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```