---
language:
  - en
license: apache-2.0
tags:
  - generated_from_trainer
base_model: microsoft/phi-2
pipeline_tag: text-generation
---

# orpo-phi2

This model is a fine-tuned version of microsoft/phi-2, trained with ORPO via the trl library on the HuggingFaceH4/ultrafeedback_binarized dataset.

## What's new

A test of the ORPO (odds ratio preference optimization) method using the trl library.
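For context, ORPO augments the standard NLL loss on the chosen response with an odds-ratio penalty that pushes the model's odds of the chosen completion above those of the rejected one. A minimal per-pair sketch is below (the function name and inputs are illustrative, not trl's API; the inputs are assumed to be average per-token log-probabilities, and `beta` weights the penalty, as trl's `ORPOConfig.beta` does):

```python
import math

def orpo_loss(chosen_logp: float, rejected_logp: float, beta: float = 0.1) -> float:
    """Sketch of the ORPO objective for one preference pair.

    chosen_logp / rejected_logp: the model's average per-token
    log-probabilities of the chosen and rejected completions.
    """
    # log odds(y|x) = log p - log(1 - p), with log(1 - p) computed
    # stably as log1p(-exp(log p))
    log_odds_chosen = chosen_logp - math.log1p(-math.exp(chosen_logp))
    log_odds_rejected = rejected_logp - math.log1p(-math.exp(rejected_logp))
    log_odds_ratio = log_odds_chosen - log_odds_rejected
    # odds-ratio penalty: -log sigmoid(log odds ratio)
    ratio_loss = -math.log(1.0 / (1.0 + math.exp(-log_odds_ratio)))
    # total loss = NLL on the chosen answer + beta * odds-ratio term
    return -chosen_logp + beta * ratio_loss
```

When the two completions are equally likely the penalty reduces to `beta * log 2`, and widening the gap in favor of the chosen completion lowers the loss.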

## How to reproduce

```shell
accelerate launch --config_file=/path/to/trl/examples/accelerate_configs/deepspeed_zero2.yaml \
    --num_processes 8 \
    /path/to/dpo/trl/examples/scripts/orpo.py \
    --model_name_or_path="microsoft/phi-2" \
    --per_device_train_batch_size 1 \
    --max_steps 20000 \
    --learning_rate 8e-5 \
    --gradient_accumulation_steps 1 \
    --logging_steps 20 \
    --eval_steps 2000 \
    --output_dir="orpo-phi2" \
    --warmup_steps 150 \
    --bf16 \
    --logging_first_step \
    --no_remove_unused_columns \
    --dataset HuggingFaceH4/ultrafeedback_binarized
```
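For reference, the flags above imply the following effective batch size per optimizer step (assuming a single-node run, so `--num_processes 8` equals the number of data-parallel workers):

```python
# Values taken from the launch command above.
per_device_train_batch_size = 1
num_processes = 8
gradient_accumulation_steps = 1
max_steps = 20000

# Effective batch per optimizer step = per-device batch * workers * accumulation.
global_batch_size = per_device_train_batch_size * num_processes * gradient_accumulation_steps

# Total preference pairs processed over the full run.
total_examples = global_batch_size * max_steps

print(global_batch_size, total_examples)  # 8 160000
```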