|
--- |
|
{} |
|
--- |
|
# This is only a tokenizer. |
|
|
|
- This tokenizer is a `PreTrainedTokenizerFast` trained on the raygx/Nepali-Extended-Corpus dataset.
|
- It was trained from scratch with the Hugging Face Tokenizers library.
|
- The tokenizer pipeline uses the following components (see the sketch after this list):

  - Model: `Tokenizer(WordPiece(unk_token="[UNK]"))`

  - Normalizer: `normalizers.Sequence([NFD(), Strip()])`

  - Pre-tokenizer: `pre_tokenizers.Sequence([Whitespace(), Digits(individual_digits=True), Punctuation()])`

  - Post-processor: `BertProcessing`
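
The pipeline above can be reproduced with the Tokenizers library roughly as follows. This is a minimal sketch: the corpus file name, vocabulary size, and special-token list are assumptions and not taken from this card; the exact settings are in the linked Kaggle notebook.

```python
from tokenizers import Tokenizer, normalizers, pre_tokenizers
from tokenizers.models import WordPiece
from tokenizers.normalizers import NFD, Strip
from tokenizers.pre_tokenizers import Whitespace, Digits, Punctuation
from tokenizers.processors import BertProcessing
from tokenizers.trainers import WordPieceTrainer

# WordPiece model with the unknown token listed above.
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))

# Unicode NFD normalization followed by whitespace stripping.
tokenizer.normalizer = normalizers.Sequence([NFD(), Strip()])

# Split on whitespace, individual digits, and punctuation.
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
    [Whitespace(), Digits(individual_digits=True), Punctuation()]
)

# Train on a raw-text corpus. The file name, vocab_size, and
# special tokens below are assumptions for illustration only.
trainer = WordPieceTrainer(
    vocab_size=30_000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=["nepali_corpus.txt"], trainer=trainer)

# BertProcessing wraps each sequence with [CLS] ... [SEP].
tokenizer.post_processor = BertProcessing(
    sep=("[SEP]", tokenizer.token_to_id("[SEP]")),
    cls=("[CLS]", tokenizer.token_to_id("[CLS]")),
)
```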
|
|
|
[Code is available here](https://www.kaggle.com/code/reganmaharjan/nepali-tokenizers-4-transformers/).
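
Because the result is exposed as a `PreTrainedTokenizerFast`, it can be loaded and used directly with the Transformers library. A minimal usage sketch; the repository id and the sample sentence are placeholders, not taken from this card:

```python
from transformers import PreTrainedTokenizerFast

# Placeholder repo id: substitute the actual Hugging Face repo for this tokenizer.
tokenizer = PreTrainedTokenizerFast.from_pretrained("raygx/Nepali-WordPiece-Tokenizer")

# Encode a Nepali sentence ("Nepal is a beautiful country.").
encoded = tokenizer("नेपाल एक सुन्दर देश हो।")
print(encoded["input_ids"])
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```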