# This is only a tokenizer.
- This tokenizer is a `PreTrainedTokenizerFast` trained on the raygx/Nepali-Extended-Corpus dataset.
- It was trained from scratch using the Hugging Face Tokenizers library.
- This tokenizer uses the following pipeline (a construction sketch follows the list):
  - Model: `Tokenizer(WordPiece(unk_token="[UNK]"))`
  - Normalizer: `normalizers.Sequence([NFD(), Strip()])`
  - Pre-tokenizer: `pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True), Punctuation()])`
  - Post-processor: `BertProcessing`
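
A minimal sketch of how these components could be assembled and trained with the Tokenizers library. The corpus file name, vocabulary size, and special-token list are assumptions for illustration, not values taken from the original training run:

```python
from tokenizers import Tokenizer, normalizers, pre_tokenizers
from tokenizers.models import WordPiece
from tokenizers.normalizers import NFD, Strip
from tokenizers.pre_tokenizers import Whitespace, Digits, Punctuation
from tokenizers.processors import BertProcessing
from tokenizers.trainers import WordPieceTrainer

# Model: WordPiece with an unknown-token fallback
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))

# Normalizer: Unicode NFD decomposition, then strip surrounding whitespace
tokenizer.normalizer = normalizers.Sequence([NFD(), Strip()])

# Pre-tokenizer: split on whitespace, isolate each digit, split punctuation
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [Whitespace(), Digits(individual_digits=True), Punctuation()]
)

# Trainer settings are assumptions; the actual run may differ
trainer = WordPieceTrainer(
    vocab_size=30_000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)

# "nepali_corpus.txt" is a hypothetical local dump of the corpus
tokenizer.train(files=["nepali_corpus.txt"], trainer=trainer)

# Post-processor: BERT-style [CLS] ... [SEP] framing
tokenizer.post_processor = BertProcessing(
    sep=("[SEP]", tokenizer.token_to_id("[SEP]")),
    cls=("[CLS]", tokenizer.token_to_id("[CLS]")),
)

tokenizer.save("tokenizer.json")
```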
[Code is available here.](https://www.kaggle.com/code/reganmaharjan/nepali-tokenizers-4-transformers/)
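
Once serialized, the trained tokenizer can be wrapped in a `PreTrainedTokenizerFast` for use with Transformers. The `tokenizer.json` file name and the sample sentence below are illustrative assumptions:

```python
from transformers import PreTrainedTokenizerFast

# Wrap the saved Tokenizers file in the fast-tokenizer interface
fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer.json",
    unk_token="[UNK]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    pad_token="[PAD]",
    mask_token="[MASK]",
)

# Encode a sample Nepali phrase ("Nepali language")
enc = fast_tokenizer("नेपाली भाषा")
print(enc.input_ids)
print(fast_tokenizer.convert_ids_to_tokens(enc.input_ids))
```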