This model is based on Mixtral-8x7B-Instruct-v0.1.

The model is fine-tuned with a proprietary alignment technique called MPO.

The model was trained on 8x A100 GPUs using LoRA.
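The exact training hyperparameters have not been published. For context, a LoRA fine-tune of Mixtral with the `peft` library typically looks like the minimal sketch below; the rank, alpha, and target modules are illustrative assumptions, not the values used for this model.

```python
# Minimal LoRA setup sketch with the `peft` library.
# Rank, alpha, and target modules are assumptions for illustration only;
# this model's actual training configuration has not been released.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    torch_dtype=torch.bfloat16,  # matches this model's BF16 weights
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                        # assumed rank, not confirmed by the card
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```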

Prompt format: This model uses the ChatML prompt format.

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
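To build this prompt programmatically, the tokenizer's chat template can be used. This is a minimal sketch assuming the repository's tokenizer ships a ChatML template, as the format above suggests; verify the rendered string before relying on it.

```python
# Render the ChatML prompt via the tokenizer's chat template.
# Assumes this repo's tokenizer_config defines a ChatML template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PSanni/MPOMixtral-8x7B-Instruct-v0.1")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "{prompt}"},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(text)  # should end with "<|im_start|>assistant"
```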

I'll provide a detailed article on the training process and data in the near future.

Model size: 46.7B parameters (BF16, Safetensors).