---
license: mit
base_model: xlm-roberta-base
tags:
- silvanus
datasets:
- id_nergrit_corpus
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base-ner-silvanus
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: id_nergrit_corpus
      type: id_nergrit_corpus
      config: ner
      split: validation
      args: ner
    metrics:
    - name: Precision
      type: precision
      value: 0.918918918918919
    - name: Recall
      type: recall
      value: 0.9272727272727272
    - name: F1
      type: f1
      value: 0.9230769230769231
    - name: Accuracy
      type: accuracy
      value: 0.9858518778229216
language:
- id
- en
- es
- it
- sk
pipeline_tag: token-classification
widget:
- text: >-
    Kebakaran hutan dan lahan terus terjadi dan semakin meluas di Kota
    Palangkaraya, Kalimantan Tengah (Kalteng) pada hari Rabu, 15 Nopember 2023
    20.00 WIB. Bahkan kobaran api mulai membakar pondok warga dan mendekati
    permukiman. BZK #RCTINews #SeputariNews #News #Karhutla #KebakaranHutan
    #HutanKalimantan #SILVANUS_Italian_Pilot_Testing
  example_title: Indonesia
- text: >-
    Wildfire rages for a second day in Evia destroying a Natura 2000 protected
    pine forest. - 5:51 PM Aug 14, 2019
  example_title: English
- text: >-
    Incendio forestal obliga a la evacuación de hasta 850 personas cerca del
    pueblo de Montichelvo en Valencia.
  example_title: Spanish
- text: >-
    Incendi boschivi nell'est del Paese: 2 morti e oltre 50 case distrutte
    nello stato del Queensland.
  example_title: Italian
- text: >-
    Lesné požiare na Sicílii si vyžiadali dva ľudské životy a evakuáciu hotela
    http://dlvr.it/SwW3sC - 23. septembra 2023 20:57
  example_title: Slovak
---
# xlm-roberta-base-ner-silvanus

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the id_nergrit_corpus dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0595
- Precision: 0.9189
- Recall: 0.9273
- F1: 0.9231
- Accuracy: 0.9859
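In the token-classification pipeline, per-token BIO predictions are typically merged into entity spans before use. The sketch below shows that merging step in plain Python; the tag names (`LOC`, `DAT`) and the tokenized example are illustrative assumptions, not the card's documented label inventory. With `transformers` installed, the model itself would be loaded via `pipeline("token-classification", model=..., aggregation_strategy="simple")`, which performs an equivalent grouping.

```python
# Sketch: merging BIO-tagged token predictions into entity spans.
# The LOC/DAT tag names below are illustrative assumptions.

def merge_bio(tokens, tags):
    """Group (token, BIO-tag) pairs into (entity_type, text) spans."""
    entities = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_type is not None:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(token)
        else:  # "O" or an inconsistent I- tag closes the current entity
            if current_type is not None:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type is not None:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

tokens = ["Kebakaran", "di", "Kota", "Palangkaraya", "pada", "15", "Nopember", "2023"]
tags   = ["O", "O", "B-LOC", "I-LOC", "O", "B-DAT", "I-DAT", "I-DAT"]
print(merge_bio(tokens, tags))
# [('LOC', 'Kota Palangkaraya'), ('DAT', '15 Nopember 2023')]
```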
## Model description

A multilingual named-entity recognition model based on xlm-roberta-base and fine-tuned on the Indonesian NERGRIT corpus (`ner` configuration). It is intended for token classification over short social-media posts, as illustrated by the wildfire-related widget examples in Indonesian, English, Spanish, Italian, and Slovak.
## Intended uses & limitations

More information needed
## Training and evaluation data

The model was trained on the id_nergrit_corpus dataset (`ner` configuration) and evaluated on its validation split.
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
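With `lr_scheduler_type: linear`, the learning rate decays linearly from its initial value to zero over all optimizer steps. A minimal sketch, assuming zero warmup steps (the card lists no warmup hyperparameter) and the 2481 total steps reported in the results table (3 epochs × 827 steps per epoch):

```python
# Linear learning-rate decay, as the HF Trainer applies for
# lr_scheduler_type "linear". Assumes 0 warmup steps, since the
# card lists no warmup hyperparameter.

def linear_lr(step, total_steps=2481, base_lr=5e-5):
    """Learning rate after `step` optimizer steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))     # 5e-05 at the start of training
print(linear_lr(827))   # after epoch 1: two thirds of the base rate
print(linear_lr(2481))  # 0.0 at the end of training
```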
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1394        | 1.0   | 827  | 0.0559          | 0.8808    | 0.9257 | 0.9027 | 0.9842   |
| 0.0468        | 2.0   | 1654 | 0.0575          | 0.9107    | 0.9190 | 0.9148 | 0.9849   |
| 0.0279        | 3.0   | 2481 | 0.0595          | 0.9189    | 0.9273 | 0.9231 | 0.9859   |
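As a quick sanity check, the reported F1 is the harmonic mean of the reported precision and recall:

```python
# Verify that the final-epoch F1 is the harmonic mean of the
# reported precision and recall values.
precision = 0.918918918918919
recall = 0.9272727272727272
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9231, matching the final row above
```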
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1