---

language: 
- tr
tags:
- roberta
license: cc-by-nc-sa-4.0
datasets:
- oscar
---


# RoBERTa Turkish medium Word-level 16k (uncased)

This model was pretrained on Turkish text with a masked language modeling (MLM) objective. The model is uncased.
The pretraining corpus is the Turkish split of OSCAR, further filtered and cleaned.
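A minimal usage sketch with the `transformers` fill-mask pipeline is shown below; the model ID is a placeholder and should be replaced with this repository's name on the Hugging Face Hub.

```python
# Minimal fill-mask sketch with the Hugging Face transformers library.
# "your-username/roberta-turkish-medium-wordlevel-16k-uncased" is a placeholder
# model ID; substitute the actual repository name of this model on the Hub.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="your-username/roberta-turkish-medium-wordlevel-16k-uncased",
)

# The model is uncased, so lowercase the input text.
# Use the tokenizer's own mask token rather than hard-coding "<mask>".
masked_text = f"bugün hava çok {fill_mask.tokenizer.mask_token}"

for prediction in fill_mask(masked_text):
    print(prediction["token_str"], prediction["score"])
```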

The model architecture is similar to bert-medium (8 layers, 8 attention heads, and a hidden size of 512). The tokenization algorithm is word-level, meaning the text is split on whitespace. The vocabulary size is 16.7k.
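For reference, the dimensions above map onto a `RobertaConfig` roughly as in the following sketch; the intermediate size and the exact vocabulary count are assumptions, and the values used during pretraining may differ.

```python
# Sketch of a RoBERTa configuration matching the bert-medium-like
# dimensions described above (not necessarily the exact pretraining config).
from transformers import RobertaConfig

config = RobertaConfig(
    vocab_size=16_700,       # ~16.7k word-level vocabulary (assumed rounding)
    num_hidden_layers=8,     # 8 transformer layers
    num_attention_heads=8,   # 8 attention heads
    hidden_size=512,         # hidden size 512
    intermediate_size=2048,  # assumed 4 * hidden_size, as in BERT-style models
)
print(config)
```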

The details can be found in this paper:
https://arxiv.org/...

### BibTeX entry and citation info
```bibtex
@article{}
```