---
base_model: FacebookAI/xlm-roberta-large-finetuned-conll03-english
tags:
- generated_from_trainer
datasets:
- conll2002
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xml-roberta-large-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2002
      type: conll2002
      config: es
      split: validation
      args: es
    metrics:
    - name: Precision
      type: precision
      value: 0.880600409370025
    - name: Recall
      type: recall
      value: 0.8897058823529411
    - name: F1
      type: f1
      value: 0.8851297291118985
    - name: Accuracy
      type: accuracy
      value: 0.9806463992982264
---
# xml-roberta-large-finetuned-ner
This model is the result of fine-tuning FacebookAI/xlm-roberta-large-finetuned-conll03-english on the conll2002 dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 0.1364
- Precision: 0.8806
- Recall: 0.8897
- F1: 0.8851
- Accuracy: 0.9806
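As a quick usage illustration, the model can be loaded with the standard `transformers` token-classification pipeline. This is a minimal sketch: the repository id below is assumed to match the model name above and may need a user namespace prefix.

```python
from transformers import pipeline

# Hypothetical repo id; prepend your Hugging Face namespace if needed.
ner = pipeline(
    "token-classification",
    model="xml-roberta-large-finetuned-ner",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)

# conll2002 (es) uses the PER, ORG, LOC and MISC entity types.
print(ner("El Real Madrid ganó en Barcelona el domingo."))
```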
## Model description
This is the largest RoBERTa model, FacebookAI/xlm-roberta-large-finetuned-conll03-english. The model was fine-tuned using the Kaggle platform [https://www.kaggle.com/settings]. To run the fine-tuning, a temporary directory had to be created on Kaggle in order to temporarily store the model, which takes up around 35 GB.
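A minimal sketch of the workaround described above, assuming a Kaggle notebook where the persistent `/kaggle/working` directory is too small for the checkpoints; the paths and settings are illustrative, not taken from the original run.

```python
import tempfile

from transformers import TrainingArguments

# Illustrative workaround: write the large checkpoints (~35 GB here) to an
# ephemeral temp dir instead of the size-limited /kaggle/working directory.
tmp_dir = tempfile.mkdtemp()  # defaults to /tmp on Kaggle notebooks

training_args = TrainingArguments(
    output_dir=tmp_dir,    # checkpoints are written here during training
    save_total_limit=1,    # keep only the most recent checkpoint to save space
)
```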
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
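These hyperparameters map directly onto `transformers.TrainingArguments`; a minimal sketch follows. The `output_dir` is an assumption, and the Adam betas/epsilon and linear schedule shown are also the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xml-roberta-large-finetuned-ner",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```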
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0743        | 1.0   | 2081  | 0.1131          | 0.8385    | 0.8587 | 0.8485 | 0.9771   |
| 0.049         | 2.0   | 4162  | 0.1429          | 0.8492    | 0.8564 | 0.8528 | 0.9756   |
| 0.031         | 3.0   | 6243  | 0.1298          | 0.8758    | 0.8817 | 0.8787 | 0.9800   |
| 0.0185        | 4.0   | 8324  | 0.1279          | 0.8827    | 0.8890 | 0.8859 | 0.9808   |
| 0.0125        | 5.0   | 10405 | 0.1364          | 0.8806    | 0.8897 | 0.8851 | 0.9806   |
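The Precision, Recall and F1 columns are entity-level scores. A hedged sketch of how such metrics are typically computed for token classification with the `seqeval` metric (not necessarily the exact script used for this run):

```python
import evaluate
import numpy as np

seqeval = evaluate.load("seqeval")  # entity-level precision/recall/F1/accuracy

def compute_metrics(eval_pred, label_list):
    """Score model predictions against references with seqeval.

    `label_list` maps label ids to tag strings (e.g. "B-PER"); tokens
    labeled -100 (special tokens, non-first subwords) are skipped.
    """
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    references = [
        [label_list[l] for l in row if l != -100] for row in labels
    ]
    hypotheses = [
        [label_list[p] for p, l in zip(p_row, l_row) if l != -100]
        for p_row, l_row in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=hypotheses, references=references)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```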
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1