---
license: apache-2.0
library_name: transformers
---
This model is based on Mixtral-8x7B. It is fine-tuned with a proprietary alignment technique called MPO. The model was trained on 8x A100s using LoRA.
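For reference, below is a minimal sketch of what a LoRA fine-tuning setup over Mixtral-8x7B might look like with PEFT. The MPO objective itself is proprietary and not shown, and the rank, alpha, dropout, and target modules here are illustrative assumptions, not the values used to train this model.

```python
# Hypothetical LoRA setup sketch; none of these hyperparameters are published
# for this model, and the proprietary MPO objective is not included.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",  # base model named in this card
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative LoRA hyperparameters (assumed, not the actual training config).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```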
Prompt format: This model uses the ChatML prompt format:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
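In practice you can let the tokenizer assemble this prompt for you, assuming the model's tokenizer ships a ChatML chat template. A minimal sketch follows; the repo id below is a placeholder, not this model's actual id:

```python
# Sketch of building a ChatML prompt via transformers' chat-template API;
# "your-org/your-dolphin-model" is a placeholder repo id.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your-org/your-dolphin-model")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```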
I'll provide a detailed article on training and data in the near future.