# xlm-roberta-large-cantemist

This model is a fine-tuned version of xlm-roberta-large on the CANTEMIST dataset, used as a benchmark in the paper *A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks*. The model achieves an F1 score of 0.904.
Please refer to the original publication for more information.
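
If the checkpoint is published on the Hugging Face Hub, inference should work through the standard `transformers` token-classification pipeline, since CANTEMIST is a Spanish clinical NER task. The following is a minimal sketch; the model ID is a placeholder, as the card does not state the repository path:

```python
from transformers import pipeline

# Minimal usage sketch. The model ID below is a placeholder
# (the card does not state the Hub repository path).
ner = pipeline(
    "token-classification",
    model="xlm-roberta-large-cantemist",  # hypothetical ID; replace with the actual path
    aggregation_strategy="simple",
)

# Illustrative Spanish clinical sentence (not taken from the dataset).
text = "Paciente diagnosticado de carcinoma ductal infiltrante de mama."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```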

## Parameters used

| Parameter | Value |
|---|---|
| batch size | 16 | 
| learning rate | 2e-05 | 
| classifier dropout | 0.1 | 
| warmup ratio | 0 | 
| warmup steps | 0 | 
| weight decay | 0 | 
| optimizer | AdamW | 
| epochs | 10 | 
| early stopping patience | 3 | 
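
These values map onto Hugging Face `TrainingArguments` in the natural way. The sketch below is an assumption, since the card does not state the training framework or evaluation settings; note that the classifier dropout is a model-config field rather than a training argument, and AdamW is the Trainer's default optimizer:

```python
from transformers import (
    AutoConfig,
    AutoModelForTokenClassification,
    EarlyStoppingCallback,
    TrainingArguments,
)

# Classifier dropout (0.1) lives on the model config, not in TrainingArguments.
config = AutoConfig.from_pretrained("xlm-roberta-large", classifier_dropout=0.1)
model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large", config=config)

# Hyperparameters from the table above; the per-epoch evaluation/saving
# strategies are assumptions needed for early stopping to work with Trainer.
training_args = TrainingArguments(
    output_dir="xlm-roberta-large-cantemist",
    per_device_train_batch_size=16,
    learning_rate=2e-5,          # AdamW is the Trainer default optimizer
    warmup_ratio=0.0,
    warmup_steps=0,
    weight_decay=0.0,
    num_train_epochs=10,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)

# Early stopping with patience 3, as in the table; pass to Trainer(callbacks=[...]).
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
```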
## BibTeX entry and citation info

```bibtex
@article{10.1093/jamia/ocae054,
    author = {García Subies, Guillem and Barbero Jiménez, Álvaro and Martínez Fernández, Paloma},
    title = {A comparative analysis of Spanish Clinical encoder-based models on NER and classification tasks},
    journal = {Journal of the American Medical Informatics Association},
    volume = {31},
    number = {9},
    pages = {2137-2146},
    year = {2024},
    month = {03},
    issn = {1527-974X},
    doi = {10.1093/jamia/ocae054},
    url = {https://doi.org/10.1093/jamia/ocae054},
}
```