Update tokenizer_config.json
#1
by patrickvonplaten - opened
This is important to prevent a nasty bug at the moment. Without this fix, one could load the wrong tokenizer (the fast GPT-2 tokenizer) when doing `tokenizer = AutoTokenizer.from_pretrained('your/model/id')`. Currently the fast GPT-2 tokenizer doesn't work correctly with OPT, so one should only use the slow tokenizer; see: https://huggingface.co/facebook/opt-6.7b#how-to-use
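For reference, a minimal sketch of the workaround described in the linked model card: explicitly request the slow tokenizer via `use_fast=False` until the fast GPT-2 tokenizer handles OPT correctly. The model id `facebook/opt-6.7b` is taken from the linked card; substitute your own checkpoint.

```python
from transformers import AutoTokenizer

# Force the slow (Python) tokenizer, since the fast GPT-2 tokenizer
# currently does not work correctly with OPT checkpoints.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)

# Quick sanity check that tokenization works as expected.
print(tokenizer("Hello world")["input_ids"])
```

With the `tokenizer_config.json` fix in this PR, `AutoTokenizer.from_pretrained` resolves to the correct slow tokenizer class by default, so the explicit `use_fast=False` becomes a safety net rather than a requirement.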
anas-awadalla changed pull request status to merged