
Bloom's tokenizer vocab looks like garbled text

#216
by ShaneSue - opened

[screenshot: raw tokenizer vocab entries containing odd characters]

Does anyone know how to fix it?

BigScience Workshop org

The tokenizer operates on bytes, so it's normal for the raw tokens to contain weird-looking characters. If your goal is to manually inspect individual tokens, you can convert them back to readable strings with the tokenizer's `convert_tokens_to_string` method.
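To illustrate why the raw vocab looks like this, here is a minimal sketch of the GPT-2-style byte-to-unicode mapping that byte-level BPE tokenizers (including Bloom's) build on: every byte 0-255 is assigned a printable unicode character, so vocab entries look "messy" on screen but decode losslessly back to text. The helper `token_to_text` below is hypothetical, written for this example; in practice you would just call `convert_tokens_to_string` on the tokenizer itself.

```python
def bytes_to_unicode():
    """Map every byte 0-255 to a printable unicode character (GPT-2 style)."""
    # Printable/visible bytes keep their own character...
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:  # ...non-printable bytes are shifted above U+0100
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, (chr(c) for c in cs)))

BYTE_TO_CHAR = bytes_to_unicode()
CHAR_TO_BYTE = {c: b for b, c in BYTE_TO_CHAR.items()}

def token_to_text(token: str) -> str:
    """Decode one raw vocab token back into a readable string."""
    return bytes(CHAR_TO_BYTE[c] for c in token).decode("utf-8", errors="replace")

# A leading space is stored as 'Ġ', and the UTF-8 bytes of "é" show up as 'Ã©':
print(token_to_text("Ġhello"))  # -> " hello"
print(token_to_text("Ã©"))      # -> "é"
```

So a token like `Ġhello` in the vocab file is simply ` hello` with the space byte rendered as `Ġ`; nothing needs fixing.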

I got it, thanks a lot

christopher changed discussion status to closed
