Add chat_template from allenai/tulu-2-dpo-70b to tokenizer_config.json
#4 opened by gardner
Add chat_template from allenai/tulu-2-dpo-70b/tokenizer_config.json
This change adds a chat_template to tokenizer_config.json. For more information, please see Templates for Chat Models.
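Concretely, the new entry in tokenizer_config.json looks roughly like this (a sketch; the value is the same template string used in the "After" example below, with newlines written as JSON \n escapes):
{
  "chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
}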
To demonstrate the outcome of this change, please see the before and after:
Before
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("TencentARC/LLaMA-Pro-8B-Instruct", legacy=False)
chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
{"role": "assistant", "content": "Great, please let me know if I can help."},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
Output:
$ python3 main.py
No chat template is defined for this tokenizer - using the default template for the LlamaTokenizerFast class. If the default is not appropriate for your model, please set `tokenizer.chat_template` to an appropriate template. See https://huggingface.co/docs/transformers/main/chat_templating for more information.
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
Hello, how are you? [/INST] I'm doing great. How can I help you today? </s><s>[INST] I'd like to show off how chat templating works! [/INST] Great, please let me know if I can help. </s>
After
If we modify the tokenizer to use a chat_template, we can see the difference:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("TencentARC/LLaMA-Pro-8B-Instruct", legacy=False)
+ tokenizer.chat_template = "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
chat = [
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
{"role": "assistant", "content": "Great, please let me know if I can help."},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
Which outputs:
$ python3 main.py
<|user|>
Hello, how are you?
<|assistant|>
I'm doing great. How can I help you today?</s>
<|user|>
I'd like to show off how chat templating works!
<|assistant|>
Great, please let me know if I can help.</s>
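Because the template also checks add_generation_prompt, the same tokenizer can append the assistant header when building a prompt for generation. A minimal sketch, continuing the snippet above and ending the conversation on a user turn:
# With add_generation_prompt=True, the template's final branch appends
# '<|assistant|>' after the last message, cueing the model to respond.
prompt = tokenizer.apply_chat_template(
    chat[:3],  # slice so the conversation ends on a user message
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)  # the printed prompt now ends with "<|assistant|>"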
Please see TencentARC/LLaMA-Pro-8B-Instruct/discussions/3.
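As an aside, setting tokenizer.chat_template in code and then saving the tokenizer should produce the same result as this PR, since save_pretrained serializes the template into tokenizer_config.json. A sketch (the output directory is an arbitrary example):
# save_pretrained() writes tokenizer.chat_template into tokenizer_config.json
# in the target directory; "./LLaMA-Pro-8B-Instruct" is just an example path.
tokenizer.save_pretrained("./LLaMA-Pro-8B-Instruct")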
Thank you for opening this, beat me to it!
Super nitpicky: the keys in tokenizer_config.json are otherwise in alphabetical order.
Thanks for this PR! I really appreciate it!
WuChengyue changed pull request status to merged