Different tokenizer silently being loaded based on `trust_remote_code`

#24
by DarkLight1337 - opened

The problem

I found that the tokenizer of Alibaba-NLP/gte-Qwen2-1.5B-instruct is loaded with a different `padding_side` depending on the `trust_remote_code` setting:

>>> from transformers import AutoTokenizer
>>> AutoTokenizer.from_pretrained("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=False)
Qwen2TokenizerFast(name_or_path='Alibaba-NLP/gte-Qwen2-1.5B-instruct', vocab_size=151643, model_max_length=32768, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|endoftext|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>']}, clean_up_tokenization_spaces=False),  added_tokens_decoder={
        151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
>>> AutoTokenizer.from_pretrained("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)
Qwen2TokenizerFast(name_or_path='Alibaba-NLP/gte-Qwen2-1.5B-instruct', vocab_size=151643, model_max_length=32768, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'eos_token': '<|endoftext|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>']}, clean_up_tokenization_spaces=False),  added_tokens_decoder={
        151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}

This setting significantly affects the output of the embedding model, yet no error is raised. Users may accidentally omit `trust_remote_code=True` and get incorrect results without knowing it.
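To illustrate why `padding_side` matters here: embedding models that pool the last token of each sequence need left padding, so that the final position of every row in a padded batch holds a real token rather than a pad token. A minimal sketch with toy token IDs (no `transformers` dependency; `PAD`, `pad_batch`, and `last_token_pool` are illustrative names, not library APIs):

```python
PAD = 0

def pad_batch(seqs, padding_side):
    """Pad variable-length token-ID sequences to the batch max length."""
    max_len = max(len(s) for s in seqs)
    padded = []
    for s in seqs:
        pad = [PAD] * (max_len - len(s))
        padded.append(pad + s if padding_side == "left" else s + pad)
    return padded

def last_token_pool(padded):
    """Last-token pooling: take the final position of each row."""
    return [row[-1] for row in padded]

batch = [[11, 12, 13], [21, 22]]

# Left padding keeps each sequence's real last token in the final slot.
left = pad_batch(batch, "left")    # [[11, 12, 13], [0, 21, 22]]
print(last_token_pool(left))       # [13, 22] -- correct last tokens

# Right padding puts PAD in the final slot of shorter sequences,
# so the pooled "embedding" silently comes from a pad token.
right = pad_batch(batch, "right")  # [[11, 12, 13], [21, 22, 0]]
print(last_token_pool(right))      # [13, 0]
```

With the wrong `padding_side`, the computation still runs without error, which is exactly why the mismatch above goes unnoticed.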

Suggestion

As discussed in https://github.com/huggingface/transformers/issues/34882, the correct fix is to set `padding_side='left'` via `tokenizer_config.json` instead of hardcoding it in the custom tokenizer code.
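For illustration, the fix would amount to a fragment like the following in the repository's `tokenizer_config.json` (a sketch of the one relevant key, not the full file):

```json
{
  "padding_side": "left"
}
```

Because `AutoTokenizer.from_pretrained` reads `tokenizer_config.json` regardless of `trust_remote_code`, both code paths would then agree on the padding side.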
