
max_steps = 200
learning_rate = 1e-6
warmup_ratio = 0.1
dpo_beta = 0.4
use_rslora = True
use_loftq = False
lora_rank = 128
lora_alpha = 256
load_separate_reference_model = False
optim = "paged_lion_32bit"
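Two of these settings interact in non-obvious ways: `warmup_ratio` is a fraction of `max_steps`, and `use_rslora` changes how the LoRA update is scaled (rank-stabilized LoRA divides `lora_alpha` by the square root of the rank instead of the rank itself). A minimal sketch in plain Python (not tied to any training library) of the quantities this config implies:

```python
import math

# Hyperparameters copied from the training config above.
max_steps = 200
warmup_ratio = 0.1
lora_rank = 128
lora_alpha = 256
use_rslora = True

# The LR scheduler warms up over the first warmup_ratio * max_steps steps.
warmup_steps = int(warmup_ratio * max_steps)  # 20 steps

# LoRA output scaling: standard LoRA uses alpha / r, while rank-stabilized
# LoRA (rsLoRA) uses alpha / sqrt(r), keeping the update magnitude stable
# as the rank grows.
if use_rslora:
    scaling = lora_alpha / math.sqrt(lora_rank)  # 256 / sqrt(128) ≈ 22.63
else:
    scaling = lora_alpha / lora_rank  # 256 / 128 = 2.0

print(warmup_steps, round(scaling, 2))
```

With standard scaling this config would give a factor of 2.0; rsLoRA raises it to roughly 22.6, which is why alpha/rank pairs chosen for standard LoRA behave differently once `use_rslora` is enabled.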

Downloads last month: 13
Model size: 7.24B params (Safetensors)
Tensor type: FP16
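As a rough back-of-the-envelope check on what the FP16 weights alone require (assuming 2 bytes per parameter; activations and KV cache are extra), one can compute:

```python
# Weight-only memory footprint of a 7.24B-parameter model stored in FP16.
# This ignores runtime overhead (activations, KV cache, optimizer state).
params = 7.24e9
bytes_per_param = 2  # FP16 = 16 bits = 2 bytes
gib = params * bytes_per_param / 2**30
print(round(gib, 1))  # ≈ 13.5 GiB
```

So the checkpoint needs on the order of 13–14 GiB of memory just to hold the weights, before any inference overhead.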

Model tree for andysalerno/openchat-nectar-0.14


Dataset used to train andysalerno/openchat-nectar-0.14