How to use Pontonkid/finetuned-xlm-roberta-base-NER with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="Pontonkid/finetuned-xlm-roberta-base-NER")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Pontonkid/finetuned-xlm-roberta-base-NER")
model = AutoModelForTokenClassification.from_pretrained("Pontonkid/finetuned-xlm-roberta-base-NER")
```

This model is a fine-tuned version of xlm-roberta-base on the ncbi_disease dataset. It achieves the following results on the evaluation set:
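The pipeline emits one prediction per sub-word token; Transformers can group these into whole entities via `aggregation_strategy="simple"`, or you can merge the IOB2 tags yourself. The sketch below assumes the ncbi_disease label scheme (`O`, `B-Disease`, `I-Disease`); the token/label pairs are illustrative, not real model output.

```python
# Minimal sketch: merge IOB2 token predictions into entity spans.
# Assumes ncbi_disease-style labels: "O", "B-Disease", "I-Disease".
def group_entities(tokens, labels):
    """Group (token, IOB2-label) pairs into (entity_text, entity_type) spans."""
    entities, current, current_type = [], [], None
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            # A B- tag always starts a new span, closing any open one.
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [token], label[2:]
        elif label.startswith("I-") and current_type == label[2:]:
            # Continuation of the open span of the same type.
            current.append(token)
        else:
            # "O" or an inconsistent I- tag closes the open span.
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append((" ".join(current), current_type))
    return entities

tokens = ["Mutations", "cause", "cystic", "fibrosis", "."]
labels = ["O", "O", "B-Disease", "I-Disease", "O"]
print(group_entities(tokens, labels))  # → [('cystic fibrosis', 'Disease')]
```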
More information needed
The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| No log | 1.0 | 340 | 0.0809 | 0.6839 | 0.8698 | 0.7657 | 0.9723 |
| 0.1092 | 2.0 | 680 | 0.0589 | 0.7974 | 0.8448 | 0.8204 | 0.9805 |
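The F1 column is the harmonic mean of the Precision and Recall columns, which can be checked directly against the table rows above:

```python
# F1 is the harmonic mean of precision and recall; verify the epoch-2 row.
precision, recall = 0.7974, 0.8448
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.8204, matching the reported value
```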
Base model
FacebookAI/xlm-roberta-base