Arabic NER
This is a SpanMarker model trained on the WikiANN dataset that can be used for Named Entity Recognition in Arabic. It uses xlm-roberta-base as the underlying encoder.
Label | Examples |
---|---|
LOC | "شور بلاغ ( مقاطعة غرمي )", "دهنو ( تایباد )", "أقاليم ما وراء البحار" |
ORG | "الحزب الاشتراكي", "نادي باسوش دي فيريرا", "دايو ( شركة )" |
PER | "فرنسوا ميتيران،", "ديفيد نالبانديان", "حكم ( كرة قدم )" |
```python
from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Run inference
entities = model.predict("موطنها بلاد الشام تركيا.")
```
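The predict call returns one dictionary per detected entity. A minimal sketch of inspecting the output, assuming the usual span_marker keys ("span", "label", "score", and character offsets); check your installed version if these differ:

```python
# Each entity is a dict with the predicted text span, its label (LOC/ORG/PER here),
# a confidence score, and character offsets into the input sentence.
for entity in entities:
    print(entity["span"], entity["label"], round(entity["score"], 3))
```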
You can finetune this model on your own dataset.
```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span_marker_model_id-finetuned")
```
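Since this model was trained on the Arabic portion of WikiANN, a closer reproduction would load that split instead of CoNLL2003. A minimal sketch, assuming the Hugging Face `wikiann` dataset with its `ar` configuration and a fresh xlm-roberta-base encoder:

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel

# The Arabic WikiANN split already provides "tokens" and "ner_tags" columns
dataset = load_dataset("wikiann", "ar")
labels = dataset["train"].features["ner_tags"].feature.names  # e.g. ["O", "B-PER", "I-PER", ...]

# Wrap the base encoder with SpanMarker, passing the label set from the dataset
model = SpanMarkerModel.from_pretrained("xlm-roberta-base", labels=labels)
```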
Training set | Min | Median | Max |
---|---|---|---|
Sentence length | 3 | 6.4592 | 63 |
Entities per sentence | 1 | 1.1251 | 13 |
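These figures can be recomputed from the training data. A minimal sketch, assuming the Arabic WikiANN training split and IOB-style tags where each `B-` tag opens an entity (the card's exact preprocessing and aggregation may differ):

```python
from statistics import median
from datasets import load_dataset

train = load_dataset("wikiann", "ar", split="train")
label_names = train.features["ner_tags"].feature.names

# Tokens per sentence and entities (B- tags) per sentence
sentence_lengths = [len(tokens) for tokens in train["tokens"]]
entities_per_sentence = [
    sum(label_names[tag].startswith("B-") for tag in tags) for tags in train["ner_tags"]
]
print(min(sentence_lengths), median(sentence_lengths), max(sentence_lengths))
print(min(entities_per_sentence), median(entities_per_sentence), max(entities_per_sentence))
```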
Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
---|---|---|---|---|---|---|
0.1989 | 500 | 0.1735 | 0.2667 | 0.0011 | 0.0021 | 0.4103 |
0.3979 | 1000 | 0.0808 | 0.7283 | 0.5314 | 0.6145 | 0.7716 |
0.5968 | 1500 | 0.0595 | 0.7876 | 0.6872 | 0.7340 | 0.8546 |
0.7957 | 2000 | 0.0532 | 0.8148 | 0.7600 | 0.7865 | 0.8823 |
0.9946 | 2500 | 0.0478 | 0.8485 | 0.8028 | 0.8250 | 0.9085 |
1.1936 | 3000 | 0.0419 | 0.8586 | 0.8084 | 0.8327 | 0.9101 |
1.3925 | 3500 | 0.0390 | 0.8628 | 0.8367 | 0.8495 | 0.9237 |
1.5914 | 4000 | 0.0456 | 0.8559 | 0.8299 | 0.8427 | 0.9231 |
1.7903 | 4500 | 0.0375 | 0.8682 | 0.8469 | 0.8574 | 0.9282 |
1.9893 | 5000 | 0.0323 | 0.8821 | 0.8635 | 0.8727 | 0.9348 |
2.1882 | 5500 | 0.0346 | 0.8781 | 0.8632 | 0.8706 | 0.9346 |
2.3871 | 6000 | 0.0318 | 0.8953 | 0.8523 | 0.8733 | 0.9345 |
2.5860 | 6500 | 0.0311 | 0.8861 | 0.8691 | 0.8775 | 0.9373 |
2.7850 | 7000 | 0.0323 | 0.8900 | 0.8689 | 0.8793 | 0.9383 |
2.9839 | 7500 | 0.0310 | 0.8892 | 0.8780 | 0.8836 | 0.9419 |
3.1828 | 8000 | 0.0320 | 0.8817 | 0.8762 | 0.8790 | 0.9397 |
3.3817 | 8500 | 0.0291 | 0.8981 | 0.8778 | 0.8878 | 0.9438 |
3.5807 | 9000 | 0.0336 | 0.8972 | 0.8792 | 0.8881 | 0.9450 |
3.7796 | 9500 | 0.0323 | 0.8927 | 0.8757 | 0.8841 | 0.9424 |
3.9785 | 10000 | 0.0315 | 0.9028 | 0.8748 | 0.8886 | 0.9436 |
4.1774 | 10500 | 0.0330 | 0.8984 | 0.8855 | 0.8919 | 0.9458 |
4.3764 | 11000 | 0.0315 | 0.9023 | 0.8844 | 0.8933 | 0.9469 |
4.5753 | 11500 | 0.0305 | 0.9029 | 0.8886 | 0.8957 | 0.9486 |
4.6171 | 11605 | 0.0323 | 0.9078 | 0.8856 | 0.8965 | 0.9487 |
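The same Trainer can report these metrics on a held-out split after training. A minimal sketch, assuming the `trainer` and `dataset` objects from the fine-tuning snippet above and that the dataset provides a `test` split:

```python
# Compute loss, precision, recall, F1 and accuracy on the test split
metrics = trainer.evaluate(dataset["test"], metric_key_prefix="test")
print(metrics)
```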
If you use this model, please cite:
```bibtex
@InProceedings{iahlt2023WikiANNArabicNER,
  author    = "iahlt",
  title     = "Arabic NER on WikiANN",
  year      = "2023",
  publisher = "",
  location  = "",
}
```
Base model: FacebookAI/xlm-roberta-base