Michael Beukman committed
Commit
bc21ea7
1 Parent(s): 383208d

Slightly improved model card

Files changed (1): README.md (+17 -1)
README.md CHANGED
@@ -68,8 +68,24 @@ In general, this model performed worse on the 'date' category compared to others
  Here are some performance details on this specific model, compared to others we trained.
  All of these metrics were calculated on the test set, and the seed was chosen that gave the best overall F1 score. The first three result columns are averaged over all categories, and the last four give performance broken down by category.
 
 
- | Model Name | Starting point | Evaluation Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
+ These models can predict the following labels for a token ([source](https://huggingface.co/Davlan/xlm-roberta-large-masakhaner)):
+
+ Abbreviation|Description
+ -|-
+ O|Outside of a named entity
+ B-DATE |Beginning of a DATE entity right after another DATE entity
+ I-DATE |DATE entity
+ B-PER |Beginning of a person’s name right after another person’s name
+ I-PER |Person’s name
+ B-ORG |Beginning of an organisation right after another organisation
+ I-ORG |Organisation
+ B-LOC |Beginning of a location right after another location
+ I-LOC |Location
+
+ | Model Name | Starting point | Evaluation / Fine-tune Language | F1 | Precision | Recall | F1 (DATE) | F1 (LOC) | F1 (ORG) | F1 (PER) |
  | -------------------------------------------------- | -------------------- | -------------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
  | [xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda) (This model) | [lug](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-luganda) | lug | 85.37 | 82.75 | 88.17 | 78.00 | 82.00 | 80.00 | 92.00 |
  | [xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda](https://huggingface.co/mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luganda) | [swa](https://huggingface.co/Davlan/xlm-roberta-base-finetuned-swahili) | lug | 82.57 | 80.38 | 84.89 | 75.00 | 80.00 | 82.00 | 87.00 |
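
As a usage illustration for the updated card, here is a minimal sketch of how the labels listed above are typically obtained with the Hugging Face `transformers` token-classification pipeline. It assumes the standard `pipeline("ner", ...)` API; the example sentence is a placeholder and is not taken from the model card.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Repository id of this model, as listed in the comparison table above.
model_name = "mbeukman/xlm-roberta-base-finetuned-luganda-finetuned-ner-luganda"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# Token-classification pipeline; aggregation_strategy="simple" merges
# B-/I- subword predictions into single entity spans (PER, ORG, LOC, DATE).
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

# Placeholder input (assumption): any Luganda sentence can be used here.
sentence = "Michael Beukman akolera mu Johannesburg."
for entity in ner(sentence):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```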