---
license: mit
language:
- it
---
--------------------------------------------------------------------------------------------------
Model: RoBERTa
Lang: IT
--------------------------------------------------------------------------------------------------
Model description
This is a RoBERTa [1] model for the Italian language, obtained using XLM-RoBERTa [2] ([xlm-roberta-base](https://huggingface.co/xlm-roberta-base)) as a starting point and specializing it for Italian by modifying the embedding layer
(as in [3], computing document-level token frequencies over the Wikipedia dataset).
The resulting model has 125M parameters, a vocabulary of 50,670 tokens, and a size of ~500 MB.
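To illustrate the general idea behind the vocabulary reduction in [3], here is a minimal, hypothetical sketch of shrinking the embedding matrix of xlm-roberta-base to a frequency-selected subset of token ids. The `keep_ids` selection is a placeholder, and this is not the actual procedure used to build this model:
```python
import torch
from transformers import XLMRobertaModel

# start from the multilingual checkpoint
model = XLMRobertaModel.from_pretrained("xlm-roberta-base")

# in practice, keep_ids would be the ids of the tokens most frequent in
# Italian Wikipedia documents; a placeholder range is used here
keep_ids = torch.arange(50670)

# copy the selected rows into a smaller embedding matrix
old_emb = model.get_input_embeddings().weight.data
new_emb = torch.nn.Embedding(len(keep_ids), old_emb.shape[1], padding_idx=1)
new_emb.weight.data = old_emb[keep_ids].clone()

# install the reduced embedding layer and update the config accordingly
model.set_input_embeddings(new_emb)
model.config.vocab_size = len(keep_ids)
```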
Quick usage
```python
from transformers import RobertaTokenizerFast, RobertaModel

# load the tokenizer and the model from the Hugging Face Hub
tokenizer = RobertaTokenizerFast.from_pretrained("osiria/roberta-base-italian")
model = RobertaModel.from_pretrained("osiria/roberta-base-italian")
```
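Continuing from the snippet above, a minimal sketch of extracting contextual embeddings for an Italian sentence (the sentence is just an illustrative example):
```python
import torch

# encode an example sentence and run it through the model
inputs = tokenizer("Una frase di esempio.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# one 768-dimensional contextual embedding per token
print(outputs.last_hidden_state.shape)  # torch.Size([1, sequence_length, 768])
```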
References
[1] RoBERTa: A Robustly Optimized BERT Pretraining Approach. https://arxiv.org/abs/1907.11692
[2] Unsupervised Cross-lingual Representation Learning at Scale. https://arxiv.org/abs/1911.02116
[3] Load What You Need: Smaller Versions of Multilingual BERT. https://arxiv.org/abs/2010.05609
License
The model is released under the MIT license.