---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium WordPiece 16k (uncased)
Pretrained model on Turkish using a masked language modeling (MLM) objective. The model is uncased.

The pretraining corpus is the Turkish split of OSCAR, further filtered and cleaned.
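
Since the model is trained with a masked language modeling objective, it can be queried through the `fill-mask` pipeline. Below is a minimal usage sketch assuming the Hugging Face `transformers` library; the model ID is a placeholder, as this card does not state the Hub path.

```python
from transformers import pipeline

# Placeholder model ID; substitute the actual Hub repository name.
fill_mask = pipeline("fill-mask", model="<org>/roberta-turkish-medium-wp-16k")

# Read the mask token from the tokenizer rather than hard-coding it, since
# this RoBERTa model uses a WordPiece (BERT-style) tokenizer and the card
# does not say whether the mask token is <mask> or [MASK].
mask = fill_mask.tokenizer.mask_token

# The model is uncased, so keep the input lowercase.
print(fill_mask(f"bugün hava çok {mask}."))
```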
The model architecture is similar to bert-medium (8 layers, 8 attention heads, and a hidden size of 512). The tokenization algorithm is WordPiece, and the vocabulary size is 16.7k.
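
For reference, the described configuration could be instantiated roughly as below. This is a sketch assuming Hugging Face's `RobertaConfig`; the intermediate size uses the common 4x hidden-size ratio, which is an assumption rather than a value stated here.

```python
from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(
    vocab_size=16_700,       # "16.7k" WordPiece vocabulary (approximate)
    num_hidden_layers=8,     # bert-medium-like depth
    num_attention_heads=8,
    hidden_size=512,
    intermediate_size=2048,  # assumed 4 * hidden_size, the usual default ratio
)

# Randomly initialized model, for shape and parameter-count reference only.
model = RobertaForMaskedLM(config)
print(model.num_parameters())
```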
Details can be found in the following paper:
https://arxiv.org/...
### BibTeX entry and citation info

```bibtex
@article{}
```