---
language:
- "bo"
tags:
- "tibetan"
- "masked-lm"
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
datasets:
- UTibetNLP/tibetan_news_classification
---

# roberta-base-tibetan

## Model Description

This is a RoBERTa model pre-trained on Tibetan texts. Training took 40 hours 44 minutes on an NVIDIA A100-SXM4-40GB GPU. You can fine-tune `roberta-base-tibetan` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-tibetan-upos), dependency parsing, and so on.

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
```
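
Since the model is tagged for the `fill-mask` pipeline, it can predict a masked token in context. The snippet below is a minimal sketch of masked-token prediction using the tokenizer and model loaded above; the input string is a hypothetical placeholder and should be replaced with real Tibetan text containing the model's `[MASK]` token.

```py
import torch

# Hypothetical placeholder: substitute actual Tibetan text with one "[MASK]".
text = "... [MASK] ..."

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Locate the masked position and print the top-5 predicted tokens for it.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = outputs.logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids))
```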