---
license: other
language:
- en
---

# This model has some tokenization problems on its own (tokensurgery with a shotgun was applied), but was meant to be used in a merge. Use at your own risk.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/umAoqWpJAhrpZbmzwiynH.png)

## Uses ChatML formatting, text completion [preset here](https://huggingface.co/Nitral-AI/Captain_BMO-Chatml-12B/tree/main/ST)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/pcZ9fqfkqiPjLirq4H2Xg.png)

(Notes pulled from the original card, since the data is the same):

Most likely a one-off train; this was done purely for internal testing purposes but seemed okay enough to release. I do not plan to offer any kind of extended support for this model, so your mileage may vary depending on use case and context size.

- Nemo 12B Instruct as base.
- 200k randomized subset of GU_instruct-Remastered-1.1, with a splash of 25k Hathor/Poppy sauce, slow cooked for 3 epochs on medium heat.
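
Since the card only links a SillyTavern preset, here is a minimal sketch of ChatML-formatted inference with `transformers`, assuming the tokenizer ships a ChatML chat template. `MODEL_ID` is a placeholder (swap in the actual repo or local path), and the sampling settings are illustrative, not recommendations.

```python
# Minimal sketch: ChatML-style chat inference via the tokenizer's chat template.
# MODEL_ID is a placeholder, not a real repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/or/repo-of-this-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short greeting."},
]

# apply_chat_template renders the <|im_start|> / <|im_end|> turns and, with
# add_generation_prompt=True, appends the assistant header for completion.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```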