CrazyKrow committed
Commit 5446d05

1 Parent(s): b531a4a

Upload the tokenizer file to all quants.


The tokenizer was incompatible with anything that makes the model continue an existing generation of its own: it was appending the Mistral EOS token even when the instruct template was changed.

All I had to do was delete the entries referring to the EOS token "</s>" in the tokenizer file, just below where it says:

"post_processor": {

"type": "TemplateProcessing",

This was breaking twinbook, oobabooga's notebook, the "Start reply with" option, and the continue button.
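For anyone who wants to apply the same fix to another quant, here is a rough sketch of doing the edit programmatically instead of by hand. It assumes the usual TemplateProcessing layout of tokenizer.json (template steps under "single"/"pair" plus a "special_tokens" table); the exact contents of the offending file may differ, so check the result before re-uploading.

```python
import json

# Sketch: strip the "</s>" EOS entries from the TemplateProcessing
# post-processor in tokenizer.json. The structure below is the usual
# tokenizers serialization format, assumed rather than copied from
# the actual file in this repo.
with open("tokenizer.json", "r", encoding="utf-8") as f:
    tok = json.load(f)

pp = tok.get("post_processor", {})
if pp.get("type") == "TemplateProcessing":
    # Drop any template step that inserts the "</s>" special token.
    for key in ("single", "pair"):
        if key in pp:
            pp[key] = [
                step for step in pp[key]
                if step.get("SpecialToken", {}).get("id") != "</s>"
            ]
    # Remove the matching entry from the special-token table, if present.
    pp.get("special_tokens", {}).pop("</s>", None)

with open("tokenizer.json", "w", encoding="utf-8") as f:
    json.dump(tok, f, ensure_ascii=False, indent=2)
```

This is just the scripted equivalent of the manual deletion described above: the post_processor keeps adding the BOS token but no longer forces an EOS onto every generation.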

Files changed (1)
  1. tokenizer.json +0 -0
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff