---
dataset_info:
  features:
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-PER
          '2': I-PER
          '3': B-ORG
          '4': I-ORG
          '5': B-LOC
          '6': I-LOC
          '7': B-TIME
          '8': I-TIME
          '9': B-TTL
          '10': I-TTL
  splits:
  - name: train
    num_bytes: 2138256
    num_examples: 3465
  download_size: 546138
  dataset_size: 2138256
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- token-classification
language:
- am
size_categories:
- 1K<n<10K
---
# Amharic Named Entity Recognition Dataset

This dataset can be used to train Named Entity Recognition (NER) models for Amharic. Entities are annotated in the BIO scheme with five classes: PER, ORG, LOC, TIME, and TTL.
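The integer `ner_tags` map to string labels via the `class_label` definition in the metadata above. A minimal sketch of decoding a tag sequence (the example tag ids are hypothetical, not taken from the dataset):

```python
# id -> label mapping taken from the ner_tags feature definition above
ID2LABEL = {
    0: "O",
    1: "B-PER", 2: "I-PER",
    3: "B-ORG", 4: "I-ORG",
    5: "B-LOC", 6: "I-LOC",
    7: "B-TIME", 8: "I-TIME",
    9: "B-TTL", 10: "I-TTL",
}

def decode_tags(tag_ids):
    """Convert integer ner_tags into their string labels."""
    return [ID2LABEL[i] for i in tag_ids]

# hypothetical 4-token example: a two-token person name, a filler, a location
print(decode_tags([1, 2, 0, 5]))  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```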
## Dataset Source
https://github.com/uhh-lt/ethiopicmodels/blob/master/am/data/NER/train.txt
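The source file appears to follow the common CoNLL layout (one whitespace-separated token/tag pair per line, blank lines between sentences); that layout is an assumption here, so verify it against the actual `train.txt`. A minimal parser under that assumption:

```python
def read_conll(text):
    """Parse CoNLL-style text into (tokens, tags) sentence pairs.

    Assumes one whitespace-separated "token tag" pair per line and
    blank lines as sentence boundaries -- an assumption about the
    source file's format, not a documented fact.
    """
    sentences, tokens, tags = [], [], []
    for line in text.splitlines():
        line = line.strip()
        if not line:  # blank line ends the current sentence
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        parts = line.split()
        tokens.append(parts[0])   # first column: the token
        tags.append(parts[-1])    # last column: the NER tag
    if tokens:  # final sentence may lack a trailing blank line
        sentences.append((tokens, tags))
    return sentences

# tiny hypothetical fragment in the assumed format
sample = "\u12a0\u1260\u1260 B-PER\n\u1260\u1236 O\n\u1260\u120b O\n"
print(read_conll(sample))
```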
## Finetuned Models

The following transformer models were finetuned on this dataset. The reported precision, recall, and F1 metrics are macro averages.
| Model | Size (# params) | Precision | Recall | F1 |
|---|---|---|---|---|
| bert-medium-amharic | 40.5M | 0.64 | 0.73 | 0.68 |
| bert-small-amharic | 27.8M | 0.64 | 0.72 | 0.68 |
| bert-mini-amharic | 10.7M | 0.60 | 0.67 | 0.64 |
| bert-tiny-amharic | 4.18M | 0.50 | 0.59 | 0.54 |
| xlm-roberta-base | 279M | 0.69 | 0.79 | 0.73 |
| am-roberta | 443M | 0.67 | 0.72 | 0.69 |
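A macro average weights every entity class equally regardless of how often it occurs, so rare classes like TTL influence the score as much as frequent ones. A sketch of the computation from hypothetical per-class F1 scores (the actual per-class results are not reported here):

```python
def macro_average(per_class_scores):
    """Unweighted mean over classes: each class counts equally,
    independent of how many entities of that class the data contains."""
    return sum(per_class_scores) / len(per_class_scores)

# hypothetical per-class F1 for PER, ORG, LOC, TIME, TTL
f1_per_class = [0.80, 0.70, 0.75, 0.60, 0.55]
print(round(macro_average(f1_per_class), 2))  # 0.68
```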
## Code

This repository contains notebooks for finetuning each of the above models on this dataset.
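Finetuning subword-tokenized models on word-level NER tags requires aligning the labels to subword tokens. A common approach (a sketch of the general technique, not necessarily what these notebooks do) keeps the label only on the first subword of each word and masks the rest with `-100` so the loss ignores them:

```python
def align_labels(word_labels, word_ids, ignore_index=-100):
    """Map word-level labels onto subword tokens.

    word_ids: for each subword token, the index of the word it came
    from (None for special tokens), as returned by the word_ids()
    method of fast Hugging Face tokenizers. Only the first subword of
    each word keeps its label; continuation subwords and special
    tokens get ignore_index so the loss skips them.
    """
    aligned, previous = [], None
    for wid in word_ids:
        if wid is None or wid == previous:
            aligned.append(ignore_index)
        else:
            aligned.append(word_labels[wid])
        previous = wid
    return aligned

# hypothetical sentence of 3 words split into 5 subwords plus [CLS]/[SEP]
word_ids = [None, 0, 0, 1, 2, 2, None]
labels = [1, 2, 0]  # B-PER, I-PER, O at the word level
print(align_labels(labels, word_ids))  # [-100, 1, -100, 2, 0, -100, -100]
```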