MiquMaid v2 2x70B DPO

Check out our blog post about this model series here! - Join our Discord server here!

[V2-70B - V2-70B-DPO - V2-2x70B - V2-2x70B-DPO]

This model uses the Alpaca prompting format

We then built a MoE (Mixture of Experts) from MiquMaid-v2-70B-DPO and the Miqu-70B-DPO base, so that every token is processed by both the finetune and the base model working together.

Both models were trained with DPO for uncensoring; more info on Miqu-70B-DPO here.

We saw a significant improvement, so we decided to share the result, even though the model is very large.
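
To make the two-expert idea concrete, here is a toy MoE layer in PyTorch in which a learned gate mixes the outputs of both experts for every token. This is only an illustrative sketch; the class name, gate design, and expert interfaces are assumptions for illustration, not the actual merge or inference code behind this model.

```python
import torch
import torch.nn as nn

class TwoExpertMoE(nn.Module):
    """Toy two-expert MoE layer: a gate mixes the finetune expert and
    the base expert for every token (illustrative only)."""

    def __init__(self, hidden_size: int, expert_finetune: nn.Module, expert_base: nn.Module):
        super().__init__()
        self.gate = nn.Linear(hidden_size, 2)  # per-token scores for the two experts
        self.experts = nn.ModuleList([expert_finetune, expert_base])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_size)
        weights = torch.softmax(self.gate(x), dim=-1)          # (batch, seq, 2)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, seq, hidden, 2)
        # Each token's output is a weighted sum of BOTH experts' outputs.
        return (outputs * weights.unsqueeze(-2)).sum(dim=-1)   # (batch, seq, hidden)
```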

Credits:

  • Undi
  • IkariDev

Description

This repo contains FP16 files of MiquMaid-v2-2x70B-DPO.

Switch: FP16 - GGUF
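
A minimal loading sketch using Hugging Face transformers, assuming you have enough memory for roughly 250 GB of FP16 weights; `device_map="auto"` shards the model across available GPUs and CPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NeverSleep/MiquMaid-v2-2x70B-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16 weights as shipped in this repo
    device_map="auto",          # shard across available devices
)
```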

Training data used:

DPO training data used:

Custom format:

### Instruction:
{system prompt}

### Input:
{input}

### Response:
{reply}
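
A small sketch of filling in that template and generating a reply, reusing the `model` and `tokenizer` from the loading sketch above; the system prompt, user input, and sampling settings are placeholder assumptions:

```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    """Fill the Alpaca-style template shown above."""
    return (
        f"### Instruction:\n{system_prompt}\n\n"
        f"### Input:\n{user_input}\n\n"
        f"### Response:\n"
    )

prompt = build_prompt("You are a helpful roleplay assistant.", "Hello!")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```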

Others

Undi: If you want to support us, you can here.

IkariDev: Visit my retro/neocities style website please kek
