---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: xlm-ate-nobi-en
results: []
language:
- en
---
# XLMR Token Classifier for Term Extraction
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) for cross-domain term extraction tasks.
## Model description
Built on [xlm-roberta-base](https://huggingface.co/xlm-roberta-base), the model performs token classification to identify and classify terms within text sequences: it assigns labels such as B-Term, I-Term, BN-Term, IN-Term, and O to individual tokens, from which candidate terms can then be extracted.
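As an illustration of the labelling scheme, here is a minimal sketch of how token-level B-Term/I-Term predictions can be grouped into term spans; the token/label pairs are made-up examples (not model output), and the nested BN-/IN- labels are ignored for simplicity:
```python
def labels_to_terms(tokens, labels):
    """Group B-Term/I-Term sequences into multi-word terms (nested BN-/IN- labels ignored here)."""
    terms, current = [], []
    for token, label in zip(tokens, labels):
        if label == "B-Term":                  # a new term starts
            if current:
                terms.append(" ".join(current))
            current = [token]
        elif label == "I-Term" and current:    # continuation of the current term
            current.append(token)
        else:                                  # "O" (or any other label) closes the current term
            if current:
                terms.append(" ".join(current))
            current = []
    if current:
        terms.append(" ".join(current))
    return terms

tokens = ["Treatment", "of", "anemia", "in", "patients", "with", "heart", "disease"]
labels = ["O", "O", "B-Term", "O", "O", "O", "B-Term", "I-Term"]
print(labels_to_terms(tokens, labels))  # ['anemia', 'heart disease']
```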
## Intended uses & limitations
The model is intended for automatic term extraction (ATE). It can also be applied to related tasks such as:
- Named Entity Recognition (NER)
- Information Extraction
## How to use
Here's a quick example of how to use the model with the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("tthhanh/xlm-ate-nobi-en-nes")
model = AutoModelForTokenClassification.from_pretrained("tthhanh/xlm-ate-nobi-en-nes")
# Create a pipeline for token classification
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
# Example text
text = "Treatment of anemia in patients with heart disease : a clinical practice guideline from the American College of Physicians ."
# Get predictions
predictions = nlp(text)
# Print predictions
for prediction in predictions:
print(prediction)
```
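Each prediction returned by the pipeline is a dictionary containing the aggregated span (`word`), its predicted label (`entity_group`), a confidence `score`, and character offsets (`start`, `end`).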
## Training and evaluation data
We fine-tuned the model on the English version of the ACTER dataset, in which Named Entities are included in the gold standard. We trained on the Corruption and Wind Energy domains, validated on the Equitation domain, and tested on the Heart Failure domain.
## Training procedure
The following hyperparameters were used during training:
```
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
```
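For illustration, a minimal sketch (not the original training script) of how these hyperparameters map onto Hugging Face `TrainingArguments`; the output directory and label list are assumptions:
```python
from transformers import AutoModelForTokenClassification, TrainingArguments

# Label set from the model description; order/ids here are illustrative.
label_list = ["O", "B-Term", "I-Term", "BN-Term", "IN-Term"]

model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(label_list)
)

training_args = TrainingArguments(
    output_dir="xlm-ate-nobi-en",        # illustrative output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=20,
    lr_scheduler_type="linear",          # Adam defaults: betas=(0.9, 0.999), epsilon=1e-08
    seed=42,
)
# A Trainer would then combine `model` and `training_args` with the tokenized
# ACTER training (Corruption + Wind Energy) and validation (Equitation) splits.
```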
Framework versions:
```
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
```
## Evaluation
We evaluate the performance of the automatic term extraction (ATE) system by comparing the candidate term list extracted from the test set with the manually annotated gold standard term list for that test set. We use exact string matching to compare the retrieved terms to those in the gold standard and calculate Precision (P), Recall (R), and F1-score (F1).
The results are reported in [Can cross-domain term extraction benefit from cross-lingual transfer and nested term labeling?](https://link.springer.com/article/10.1007/s10994-023-06506-7#Sec12).
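As an illustration of this exact-match evaluation, a minimal sketch with made-up term lists (not the actual ACTER annotations):
```python
def exact_match_prf(candidates, gold):
    """Precision/Recall/F1 from exact string matching between candidate and gold term sets."""
    cand, ref = set(candidates), set(gold)
    tp = len(cand & ref)                                  # terms retrieved and in the gold standard
    precision = tp / len(cand) if cand else 0.0
    recall = tp / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

candidates = ["anemia", "heart disease", "clinical practice guideline", "patients"]
gold = ["anemia", "heart disease", "clinical practice guideline", "heart failure"]
print(exact_match_prf(candidates, gold))  # (0.75, 0.75, 0.75)
```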
## Citation
If you use this model in your research or application, please cite it as follows:
```
@inproceedings{tran2022can,
title={Can cross-domain term extraction benefit from cross-lingual transfer?},
author={Tran, Hanh Thi Hong and Martinc, Matej and Doucet, Antoine and Pollak, Senja},
booktitle={International Conference on Discovery Science},
pages={363--378},
year={2022},
organization={Springer}
}
```