---
language:
- "bo"
tags:
- "tibetan"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
datasets:
- UTibetNLP/tibetan_news_classification
---
# roberta-base-tibetan
## Model Description
This is a RoBERTa model pre-trained on Tibetan texts. Training took 40 hours and 44 minutes on an NVIDIA A100-SXM4-40GB GPU. You can fine-tune `roberta-base-tibetan` for downstream tasks such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-tibetan-upos), dependency parsing, and so on.
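As a minimal sketch of the fine-tuning direction, assuming a token-classification task such as UPOS tagging, the pre-trained encoder can be loaded with a fresh classification head; the `num_labels` value below is a placeholder and not part of this model card:
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Load the pre-trained Tibetan encoder with an untrained token-classification head.
# num_labels=17 is assumed here to match the Universal POS tag set; adjust it for your own labels.
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/roberta-base-tibetan",
    num_labels=17,
)
# The model is then ready for fine-tuning on labeled data, e.g. with the Trainer API.
```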
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and masked-language model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
```
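Since the pipeline tag is `fill-mask`, a quick sanity check is to predict a masked token. A minimal sketch, assuming the `[MASK]` token declared above; the Tibetan input text is a made-up placeholder, not an example from the model authors:
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM, FillMaskPipeline

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
fill_mask = FillMaskPipeline(model=model, tokenizer=tokenizer)

# Placeholder input: any Tibetan sentence containing tokenizer.mask_token works.
text = "བོད་ཀྱི་" + tokenizer.mask_token
for prediction in fill_mask(text):
    # Each prediction is a dict with "token_str", "score", and the filled "sequence".
    print(prediction["token_str"], prediction["score"])
```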