---
language:
  - bo
tags:
  - tibetan
  - masked-lm
license: cc-by-sa-4.0
pipeline_tag: fill-mask
mask_token: '[MASK]'
datasets:
  - UTibetNLP/tibetan_news_classification
---

# roberta-base-tibetan

## Model Description

This is a RoBERTa model pre-trained on Tibetan texts. Training took 40 hours and 44 minutes on a single NVIDIA A100-SXM4-40GB GPU. You can fine-tune roberta-base-tibetan for downstream tasks such as POS-tagging, dependency parsing, and so on, as sketched below.
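
A fine-tuning setup for POS-tagging might look like the following minimal sketch. The token-classification head and the label count are illustrative assumptions, not part of this release; adjust them to your own corpus.

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")

# num_labels is an assumption (e.g. the 17 Universal Dependencies UPOS tags);
# set it to match the tag set of your own training data.
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/roberta-base-tibetan",
    num_labels=17,
)
```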

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and the masked-LM head from the Hub.
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
```
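
Since the checkpoint is tagged for the fill-mask pipeline with `[MASK]` as the mask token, you can also query it directly. A minimal sketch follows; the input string is a placeholder, so substitute your own Tibetan sentence containing `[MASK]`.

```py
from transformers import pipeline

# Build a fill-mask pipeline from the checkpoint above.
unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-tibetan")

# Placeholder input: replace with a real Tibetan sentence containing [MASK].
for candidate in unmasker("... [MASK] ..."):
    print(candidate["token_str"], candidate["score"])
```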