---
language:
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---
# RoBERTa Turkish medium Word-level 16k (uncased)
Pretrained model on the Turkish language using a masked language modeling (MLM) objective. The model is uncased.
The pretraining corpus is the Turkish split of OSCAR, further filtered and cleaned.
The model architecture is similar to bert-medium (8 layers, 8 attention heads, and a hidden size of 512). The tokenization algorithm is word-level, meaning text is split on whitespace. The vocabulary size is 16.7k.
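
### How to use
A minimal usage sketch with the `transformers` library, assuming the checkpoint loads through the standard Hugging Face auto classes; the repository id below is a placeholder, so substitute the actual model id from the Hub.
```python
from transformers import pipeline

# Placeholder repository id; replace with the actual model id on the Hub.
fill_mask = pipeline("fill-mask", model="ctoraman/RoBERTa-TR-medium-word-16k")

# The model is uncased, so lowercase the input text first.
# RoBERTa-style models use <mask> as the mask token; we read it from the
# tokenizer to stay robust to the checkpoint's configuration.
text = f"bu çok {fill_mask.tokenizer.mask_token} bir film.".lower()
print(fill_mask(text))
```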
Details can be found in this paper:
https://arxiv.org/...
### BibTeX entry and citation info
```bibtex
@article{}
```