This is the open-source training corpus of our Tibetan BERT model.
Please cite our paper if you use this training corpus or the model:
@inproceedings{10.1145/3548608.3559255,
author = {Zhang, Jiangyan and Kazhuo, Deji and Gadeng, Luosang and Trashi, Nyima and Qun, Nuo},
title = {Research and Application of Tibetan Pre-Training Language Model Based on BERT},
year = {2022},
isbn = {9781450397179},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3548608.3559255},
doi = {10.1145/3548608.3559255},
abstract = {In recent years, pre-training language models have been widely used in the field of natural language processing, but the research on Tibetan pre-training language models is still in the exploratory stage. To promote the further development of Tibetan natural language processing and effectively solve the problem of the scarcity of Tibetan annotation data sets, the article studies the Tibetan pre-training language model based on BERT. First, given the characteristics of the Tibetan language, we constructed a data set for the BERT pre-training language model and downstream text classification tasks. Secondly, construct a small-scale Tibetan BERT pre-training language model to train it. Finally, the performance of the model was verified through the downstream task of Tibetan text classification, and an accuracy rate of 86\% was achieved on the task of text classification. Experiments show that the model we built has a significant effect on the task of Tibetan text classification.},
booktitle = {Proceedings of the 2022 2nd International Conference on Control and Intelligent Robotics},
pages = {519–524},
numpages = {6},
location = {Nanjing, China},
series = {ICCIR '22}
}
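
If you want to inspect the corpus programmatically, below is a minimal sketch using the Hugging Face `datasets` library. The file name and the plain-text, one-example-per-line layout are assumptions, since this card does not describe the file format; adjust them to match the files actually shipped in the repository.

```python
from datasets import load_dataset

# A minimal sketch, assuming the corpus is distributed as plain-text files.
# "tibetan_corpus.txt" is a placeholder -- replace it with the actual data
# file(s) in this repository.
corpus = load_dataset(
    "text",
    data_files={"train": "tibetan_corpus.txt"},
    split="train",
    streaming=True,  # stream so the full corpus need not fit in memory
)

# Peek at the first few examples.
for example in corpus.take(3):
    print(example["text"])
```

Streaming mode lets you look at a handful of examples without downloading or decoding the entire corpus first.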