---
language: id
tags:
- roberta-base-indonesia
license: mit
datasets:
- wikipedia
widget:
- text: "Gajah <mask> sedang makan di kebun binatang."
---
# Indonesian RoBERTa Base

A RoBERTa base model for Indonesian (`id`), trained on the Indonesian Wikipedia dataset and released under the MIT license.
## How to Use
### As Masked Language Model
```python
from transformers import pipeline

pretrained_name = "akahana/roberta-base-indonesia"

# Load the model and tokenizer into a fill-mask pipeline.
fill_mask = pipeline(
    "fill-mask",
    model=pretrained_name,
    tokenizer=pretrained_name,
)

# Predict candidate tokens for the <mask> position.
fill_mask("Gajah <mask> sedang makan di kebun binatang.")
```
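The pipeline returns a list of candidate fills, each a dict containing (among other keys) the completed `sequence`, the predicted `token_str`, and a `score`. A minimal sketch of how to pick the top prediction, using a mock result list in place of actual model output (the Indonesian word and score below are illustrative, not real predictions):

```python
# Mock of the fill-mask output format; real output comes from fill_mask(...).
results = [
    {"sequence": "Gajah itu sedang makan di kebun binatang.",
     "token_str": "itu", "score": 0.42},
    {"sequence": "Gajah yang sedang makan di kebun binatang.",
     "token_str": "yang", "score": 0.17},
]

# Candidates are scored; the highest-scoring entry is the model's best guess.
best = max(results, key=lambda r: r["score"])
print(best["token_str"])
```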
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast

pretrained_name = "akahana/roberta-base-indonesia"

# Load the base model (no task head) and its fast tokenizer.
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)

prompt = "Gajah <mask> sedang makan di kebun binatang."

# Tokenize and return PyTorch tensors, then run a forward pass.
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
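`output.last_hidden_state` has shape `(batch, sequence_length, hidden_size)`, i.e. one vector per token. A common way to reduce this to a single sentence embedding is attention-mask-weighted mean pooling. A minimal sketch using dummy tensors in place of the real model output (shapes chosen small for illustration):

```python
import torch

# Dummy stand-ins for model output and tokenizer attention mask:
# batch of 1, 5 tokens, hidden size 4 (a real model would use 768).
last_hidden_state = torch.randn(1, 5, 4)
attention_mask = torch.tensor([[1, 1, 1, 1, 0]])  # last token is padding

# Zero out padding positions, then average over the real tokens.
mask = attention_mask.unsqueeze(-1).float()     # (1, 5, 1)
summed = (last_hidden_state * mask).sum(dim=1)  # (1, 4)
counts = mask.sum(dim=1).clamp(min=1e-9)        # (1, 1)
sentence_embedding = summed / counts            # (1, 4)

print(sentence_embedding.shape)
```

With the real model, substitute `output.last_hidden_state` and `encoded_input["attention_mask"]` for the dummy tensors.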