<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
base_model: winglian/m12b-20240721-test010
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: chatml
rl: simpo
rl_beta: 2.5
cpo_alpha: 0.05
simpo_gamma: 0.1

datasets:
  - path: princeton-nlp/gemma2-ultrafeedback-armorm
    type: chat_template.default
    chat_template: chatml
    field_messages: chosen
    field_chosen: chosen
    field_rejected: rejected
    message_field_role: role
    message_field_content: content
    roles:
      system:
        - system
      user:
        - user
      assistant:
        - assistant

dataset_prepared_path:
val_set_size: 0.0
output_dir: ./outputs/simpo-out
save_safetensors: true
save_only_model: true  # fsdp seems to crap out saving the optimizer

sequence_len: 8192
sample_packing: false
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r: 256
lora_alpha: 256
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
# peft_use_rslora: true

wandb_project: romulus-12b
wandb_entity: oaaic
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 16
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 5.0e-7

train_on_inputs: false
group_by_length: false

bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:

warmup_steps: 25
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed: deepspeed_configs/zero3_bf16_cpuoffload_params.json
weight_decay: 0.0
fsdp:
fsdp_config:
```

</details><br>
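For readers unfamiliar with the RL settings above: `rl: simpo` selects the reference-free SimPO objective (Meng et al., 2024), in which the implicit reward is the length-averaged log-likelihood of the policy itself, so no reference model is loaded. A sketch of how the three knobs combine, under my reading that `cpo_alpha > 0` mixes in a CPO-style NLL term on the chosen response (the exact normalization of that term depends on the trainer implementation):

$$
\mathcal{L}(\theta) = -\log \sigma\!\left(\frac{\beta}{|y_w|}\log \pi_\theta(y_w \mid x) - \frac{\beta}{|y_l|}\log \pi_\theta(y_l \mid x) - \gamma\right) + \alpha\,\mathcal{L}_{\mathrm{NLL}}(\theta;\, y_w)
$$

Here $y_w$/$y_l$ are the chosen/rejected responses, β = `rl_beta` = 2.5 scales the reward margin, γ = `simpo_gamma` = 0.1 is the target reward margin, and α = `cpo_alpha` = 0.05 weights the SFT regularizer.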
# outputs/simpo-out
This model is a SimPO fine-tune of [winglian/m12b-20240721-test010](https://huggingface.co/winglian/m12b-20240721-test010) on the [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) preference dataset (see the config above).
## Model description
More information needed
## Intended uses & limitations
More information needed
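Pending an official usage guide, here is a minimal inference sketch based on the config above (ChatML chat template, bf16 training), assuming the standard `transformers` API and that the tokenizer ships with the chat template attached. Untested against this checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "axolotl-ai-co/romulus-mistral-nemo-12b-simpo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # trained in bf16 per the config
    device_map="auto",
)

# ChatML-style conversation, matching `chat_template: chatml` above
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what SimPO training changes about a model."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```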
## Training and evaluation data

Training used the [princeton-nlp/gemma2-ultrafeedback-armorm](https://huggingface.co/datasets/princeton-nlp/gemma2-ultrafeedback-armorm) preference dataset (see the config above); no held-out evaluation split was carved off (`val_set_size: 0.0`).
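For orientation, the `chat_template.default` dataset block in the config maps each record's `chosen` and `rejected` columns to message lists with `role`/`content` keys. An illustrative record shape follows; the contents are hypothetical, and the real dataset rows may carry additional columns:

```python
# Hypothetical preference record, shaped per the dataset config above
# (field_chosen/field_rejected as message lists with role/content keys).
example = {
    "chosen": [
        {"role": "user", "content": "How do I reverse a list in Python?"},
        {"role": "assistant", "content": "Use my_list[::-1] for a reversed copy."},
    ],
    "rejected": [
        {"role": "user", "content": "How do I reverse a list in Python?"},
        {"role": "assistant", "content": "You can't; Python lists are immutable."},
    ],
}
```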
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 466
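
The total train batch size is derived rather than set directly: `micro_batch_size` (1) × `gradient_accumulation_steps` (16) × `num_devices` (8) = 128 sequences per optimizer step.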
### Training results
### Framework versions
- Transformers 4.43.1
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1