ORPO Trainer

Overview

Odds Ratio Preference Optimization (ORPO) was introduced in ORPO: Monolithic Preference Optimization without Reference Model by Jiwoo Hong, Noah Lee, and James Thorne.

The abstract from the paper is the following:

While recent preference alignment algorithms for language models have demonstrated promising results, supervised fine-tuning (SFT) remains imperative for achieving successful convergence. In this paper, we study the crucial role of SFT within the context of preference alignment, emphasizing that a minor penalty for the disfavored generation style is sufficient for preference-aligned SFT. Building on this foundation, we introduce a straightforward and innovative reference model-free monolithic odds ratio preference optimization algorithm, ORPO, eliminating the necessity for an additional preference alignment phase. We demonstrate, both empirically and theoretically, that the odds ratio is a sensible choice for contrasting favored and disfavored styles during SFT across the diverse sizes from 125M to 7B. Specifically, fine-tuning Phi-2 (2.7B), Llama-2 (7B), and Mistral (7B) with ORPO on the UltraFeedback alone surpasses the performance of state-of-the-art language models with more than 7B and 13B parameters: achieving up to 12.20% on AlpacaEval_{2.0} (Figure 1), 66.19% on IFEval (instruction-level loose, Table 6), and 7.32 in MT-Bench (Figure 12). We release code and model checkpoints for Mistral-ORPO-alpha (7B) and Mistral-ORPO-beta (7B).

ORPO studies the crucial role of SFT within the context of preference alignment. Using preference data, the method posits that a minor penalty for the disfavored generation, together with a strong adaptation signal toward the chosen response via a simple log odds ratio term appended to the NLL loss, is sufficient for preference-aligned SFT.

ORPO is thus a reference model-free preference optimization algorithm: it eliminates the need for an additional preference alignment phase, saving compute and memory.
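Concretely, the objective adds a relative ratio term, based on the odds odds(y|x) = P(y|x) / (1 - P(y|x)), to the standard NLL loss over the chosen responses. Below is a minimal sketch of that objective, assuming mean per-token log-probabilities for each response; the function name is hypothetical and this is an illustration, not the trainer's exact implementation:

import torch
import torch.nn.functional as F

def orpo_loss_sketch(chosen_logps, rejected_logps, beta=0.1):
    """Illustrative ORPO objective; *_logps are mean per-token log-probs."""
    # log odds(y|x) = log p - log(1 - p); log1p(-exp(logp)) is a stable log(1 - p)
    log_odds = (chosen_logps - rejected_logps) - (
        torch.log1p(-torch.exp(chosen_logps))
        - torch.log1p(-torch.exp(rejected_logps))
    )
    ratio_loss = -F.logsigmoid(log_odds)  # penalizes the disfavored response
    nll_loss = -chosen_logps              # SFT term on the chosen response
    return (nll_loss + beta * ratio_loss).mean()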

The official code can be found in xfactlab/orpo.

This post-training method was contributed by Kashif Rasul, Lewis Tunstall and Alvaro Bartolome.

Quick start

This example demonstrates how to train a model using the ORPO method. We use the Qwen 0.5B model as the base model and the preference data from the UltraFeedback dataset (trl-lib/ultrafeedback_binarized).

Below is the script to train the model:

# train_orpo.py
from datasets import load_dataset
from trl import ORPOConfig, ORPOTrainer
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = ORPOConfig(output_dir="Qwen2-0.5B-ORPO", logging_steps=10)
trainer = ORPOTrainer(model=model, args=training_args, processing_class=tokenizer, train_dataset=train_dataset)
trainer.train()

Execute the script using the following command:

accelerate launch train_orpo.py

Distributed across 8 GPUs, the training takes approximately 30 minutes. You can verify the training progress by checking the reward graph. An increasing trend in the reward margin indicates that the model is improving and generating better responses over time.

To see how the trained model performs, you can use the TRL Chat CLI.

$ trl chat --model_name_or_path trl-lib/Qwen2-0.5B-ORPO
<quentin_gallouedec>:
What is the best programming language?

<trl-lib/Qwen2-0.5B-ORPO>:
It's challenging to determine the best programming language as no one language is perfect, as the complexity of a task and the type of project are significant factors. Some popular languages include Java, Python, JavaScript, and C++. If you have specific needs or requirements for a specific project, it's important to choose the language that best suits those needs.

Here are some other factors to consider when choosing a programming language for a project:

 • Language proficiency: A good programming language is more likely to be easy to understand and use, and will allow developers to collaborate on projects more efficiently.                                     
 • Ease of use: There are tools and libraries available to make programming more accessible, so developers should choose a language that can help them get started easier.
 • Code readability: A clear and concise codebase should be easy to read and understand, especially when working with large projects.
 • Tool and framework support: There are numerous libraries available for Python, Java, and JavaScript, along with tools like IDEs and static code analysis tools.
 • Accessibility: Some languages and tools have features that make them more accessible to developers with disabilities, such as support for screen readers.
 • Version control: As your projects grow and complexity increases, version control tools can be beneficial for tracking changes.

Expected dataset type

ORPO requires a preference dataset. The ORPOTrainer supports both conversational and standard dataset formats. When provided with a conversational dataset, the trainer will automatically apply the chat template to the dataset.

Although the ORPOTrainer supports both explicit and implicit prompts, we recommend using explicit prompts. If provided with an implicit prompt dataset, the trainer will automatically extract the prompt from the "chosen" and "rejected" columns. For more information, refer to the preference style section.
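For illustration, a single example in the standard, explicit-prompt preference format looks like this (the values are made up):

# One row in the explicit-prompt preference format expected by ORPOTrainer
example = {
    "prompt": "What color is the sky?",
    "chosen": "It is blue.",
    "rejected": "It is green.",
}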

Example script

We provide an example script to train a model using the ORPO method. The script is available at examples/scripts/orpo.py.

To test the ORPO script with the Qwen2 0.5B model on the UltraFeedback dataset, run the following command:

accelerate launch examples/scripts/orpo.py \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --num_train_epochs 1 \
    --logging_steps 25 \
    --output_dir Qwen2-0.5B-ORPO

Usage tips

For Mixture of Experts Models: Enabling the auxiliary loss

MoEs are most efficient when the load is roughly evenly distributed between experts.
To ensure that we train MoEs similarly during preference-tuning, it is beneficial to add the auxiliary loss from the load balancer to the final loss.

This option is enabled by setting output_router_logits=True in the model config (e.g. MixtralConfig).
To scale how much the auxiliary loss contributes to the total loss, use the hyperparameter router_aux_loss_coef=... (default: 0.001) in the model config.
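For example, both options can be set on the model config as sketched below (the Mixtral checkpoint is just one example of an MoE model):

from transformers import AutoModelForCausalLM

# Any Mixtral-style MoE model with a load-balancing auxiliary loss works the same way.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
model.config.output_router_logits = True   # add the router auxiliary loss to the total loss
model.config.router_aux_loss_coef = 0.001  # weight of the auxiliary loss (the default)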

Logged metrics

While training and evaluating we record the following reward metrics:

  • rewards/chosen: the mean log probabilities of the policy model for the chosen responses scaled by beta
  • rewards/rejected: the mean log probabilities of the policy model for the rejected responses scaled by beta
  • rewards/accuracies: mean of how often the chosen rewards are greater than the corresponding rejected rewards
  • rewards/margins: the mean difference between the chosen and corresponding rejected rewards
  • log_odds_chosen: the mean log odds ratio of the chosen responses over the rejected responses
  • log_odds_ratio: the mean of the log(sigmoid(log_odds_chosen))
  • nll_loss: the mean negative log likelihood loss from the SFT part of the loss over chosen responses
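As a rough guide to how these metrics relate to one another, a simplified sketch (beta is ORPOConfig.beta and the *_logps variables stand for the policy log-probabilities; this is illustrative, not the trainer's exact code):

chosen_rewards   = beta * policy_chosen_logps    # rewards/chosen
rejected_rewards = beta * policy_rejected_logps  # rewards/rejected
margins    = chosen_rewards - rejected_rewards   # rewards/margins
accuracies = (chosen_rewards > rejected_rewards).float().mean()  # rewards/accuracies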

ORPOTrainer

class trl.ORPOTrainer


( model: Union = None args: Optional = None data_collator: Optional = None train_dataset: Optional = None eval_dataset: Union = None processing_class: Union = None model_init: Optional = None callbacks: Optional = None optimizers: Tuple = (None, None) preprocess_logits_for_metrics: Optional = None peft_config: Optional = None compute_metrics: Optional = None )

Parameters

  • model (transformers.PreTrainedModel) — The model to train, preferably an AutoModelForCausalLM.
  • args (ORPOConfig) — The ORPO config arguments to use for training.
  • data_collator (transformers.DataCollator) — The data collator to use for training. If None is specified, the default data collator (DPODataCollatorWithPadding) will be used which will pad the sequences to the maximum length of the sequences in the batch, given a dataset of paired sequences.
  • train_dataset (datasets.Dataset) — The dataset to use for training.
  • eval_dataset (datasets.Dataset) — The dataset to use for evaluation.
  • processing_class (PreTrainedTokenizerBase or BaseImageProcessor or FeatureExtractionMixin or ProcessorMixin, optional) — Processing class used to process the data. If provided, it will be used to automatically process the inputs for the model, and it will be saved along with the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.
  • model_init (Callable[[], transformers.PreTrainedModel]) — The model initializer to use for training. If None is specified, the default model initializer will be used.
  • callbacks (List[transformers.TrainerCallback]) — The callbacks to use for training.
  • optimizers (Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]) — The optimizer and scheduler to use for training.
  • preprocess_logits_for_metrics (Callable[[torch.Tensor, torch.Tensor], torch.Tensor]) — The function to use to preprocess the logits before computing the metrics.
  • peft_config (Dict, defaults to None) — The PEFT configuration to use for training. If you pass a PEFT configuration, the model will be wrapped in a PEFT model.
  • compute_metrics (Callable[[EvalPrediction], Dict], optional) — The function to use to compute the metrics. Must take an EvalPrediction and return a dictionary mapping strings to metric values.

Initialize ORPOTrainer.
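For example, passing a peft_config wraps the model in a PEFT model. A sketch reusing the quick start's model, tokenizer, and dataset (the LoRA hyperparameters are illustrative, not recommended defaults):

from peft import LoraConfig
from trl import ORPOConfig, ORPOTrainer

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)  # illustrative values
trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(output_dir="Qwen2-0.5B-ORPO-LoRA"),
    processing_class=tokenizer,
    train_dataset=train_dataset,
    peft_config=peft_config,  # the trainer wraps the model in a PEFT model
)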

build_tokenized_answer


( prompt answer )

The Llama tokenizer does not satisfy enc(a + b) = enc(a) + enc(b). It does ensure that enc(a + b) = enc(a) + enc(a + b)[len(enc(a)):]. Reference: https://github.com/EleutherAI/lm-evaluation-harness/pull/531#issuecomment-1595586257
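A short illustration of this property, assuming a Llama-style tokenizer (the example strings and variable names are hypothetical):

prompt, answer = "What is 2 + 2?", " 4"
prompt_ids = tokenizer.encode(prompt, add_special_tokens=False)
full_ids = tokenizer.encode(prompt + answer, add_special_tokens=False)
# full_ids starts with prompt_ids, so the answer tokens are the remainder:
answer_ids = full_ids[len(prompt_ids):]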

concatenated_forward


( model: Module batch: Dict )

Run the given model on the given batch of inputs, concatenating the chosen and rejected inputs together.

We do this to avoid doing two forward passes, because it’s faster for FSDP.

concatenated_inputs


( batch: Dict is_encoder_decoder: bool = False label_pad_token_id: int = -100 padding_value: int = 0 device: Optional = None )

Concatenate the chosen and rejected inputs into a single tensor.
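A simplified sketch of the idea, for input ids only (the real method also handles attention masks, labels, and encoder-decoder inputs):

import torch
import torch.nn.functional as F

def concat_chosen_rejected(chosen_ids, rejected_ids, padding_value=0):
    """Pad both (batch, seq) tensors to a common length, then stack on the batch dim."""
    max_len = max(chosen_ids.shape[1], rejected_ids.shape[1])
    pad = lambda t: F.pad(t, (0, max_len - t.shape[1]), value=padding_value)
    # One forward pass over the concatenated batch then scores both responses.
    return torch.cat([pad(chosen_ids), pad(rejected_ids)], dim=0)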

create_model_card


( model_name: Optional = None dataset_name: Optional = None tags: Union = None )

Parameters

  • model_name (str, optional, defaults to None) — The name of the model.
  • dataset_name (str, optional, defaults to None) — The name of the dataset used for training.
  • tags (str, List[str] or None, optional, defaults to None) — Tags to be associated with the model card.

Creates a draft of a model card using the information available to the Trainer.

evaluation_loop


( dataloader: DataLoader description: str prediction_loss_only: Optional = None ignore_keys: Optional = None metric_key_prefix: str = 'eval' )

Overrides the built-in evaluation loop to store metrics for each batch. Prediction/evaluation loop, shared by Trainer.evaluate() and Trainer.predict().

Works both with or without labels.

generate_from_model


( model batch: Dict )

Generate samples from the model for the given batch of inputs.

get_batch_logps


( logits: FloatTensor labels: LongTensor average_log_prob: bool = False label_pad_token_id: int = -100 is_encoder_decoder: bool = False )

Compute the log probabilities of the given labels under the given logits.
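A simplified sketch of the computation for a decoder-only model (this mirrors the usual next-token shift and label masking, not necessarily the exact implementation):

import torch

def batch_logps_sketch(logits, labels, label_pad_token_id=-100, average_log_prob=False):
    # Shift so that tokens < n predict token n
    labels = labels[:, 1:].clone()
    logits = logits[:, :-1, :]
    mask = labels != label_pad_token_id
    labels[labels == label_pad_token_id] = 0  # dummy index so gather is valid
    per_token_logps = torch.gather(
        logits.log_softmax(-1), dim=2, index=labels.unsqueeze(2)
    ).squeeze(2)
    if average_log_prob:
        return (per_token_logps * mask).sum(-1) / mask.sum(-1)
    return (per_token_logps * mask).sum(-1)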

get_batch_loss_metrics


( model batch: Dict train_eval: Literal = 'train' )

Compute the ORPO loss and other metrics for the given batch of inputs for train or test.

log


( logs: Dict )

Parameters

  • logs (Dict[str, float]) — The values to log.

Log logs on the various objects watching training, including stored metrics.

odds_ratio_loss


( policy_chosen_logps: FloatTensor policy_rejected_logps: FloatTensor ) → A tuple of three tensors

Returns

A tuple of three tensors

(losses, chosen_rewards, rejected_rewards). The losses tensor contains the ORPO loss for each example in the batch. The chosen_rewards and rejected_rewards tensors contain the rewards for the chosen and rejected responses, respectively. The log odds ratio of the chosen responses over the rejected responses and the log(sigmoid(log_odds_chosen)) are also returned for logging purposes.

Compute ORPO’s odds ratio (OR) loss for a batch of policy model log probabilities.

tokenize_row


( feature model: Union = None )

Tokenize a single row from an ORPO-specific dataset.

At this stage, we don’t convert to PyTorch tensors yet; we just handle truncation in case the prompt + chosen or prompt + rejected responses are too long. We first truncate the prompt; if it is still too long, we truncate the chosen/rejected.

We also create the labels for the chosen/rejected responses, which are of length equal to the sum of the length of the prompt and the chosen/rejected response, with label_pad_token_id for the prompt tokens.
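In other words, the labels are the full sequence of input ids with the prompt positions masked out, roughly as follows (illustrative; prompt_ids and answer_ids stand for lists of token ids):

labels = prompt_ids + answer_ids  # token ids for prompt + response
labels[: len(prompt_ids)] = [label_pad_token_id] * len(prompt_ids)  # ignore prompt tokens in the NLL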

ORPOConfig

class trl.ORPOConfig


( output_dir: str overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False eval_strategy: Union = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: Optional = None per_gpu_eval_batch_size: Optional = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: Optional = None eval_delay: Optional = 0 torch_empty_cache_steps: Optional = None learning_rate: float = 1e-06 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: Union = 'linear' lr_scheduler_kwargs: Union = <factory> warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: Optional = 'passive' log_level_replica: Optional = 'warning' log_on_each_node: bool = True logging_dir: Optional = None logging_strategy: Union = 'steps' logging_first_step: bool = False logging_steps: float = 500 logging_nan_inf_filter: bool = True save_strategy: Union = 'steps' save_steps: float = 500 save_total_limit: Optional = None save_safetensors: Optional = True save_on_each_node: bool = False save_only_model: bool = False restore_callback_states_from_checkpoint: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: Optional = None jit_mode_eval: bool = False use_ipex: bool = False bf16: bool = False fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: Optional = None local_rank: int = -1 ddp_backend: Optional = None tpu_num_cores: Optional = None tpu_metrics_debug: bool = False debug: Union = '' dataloader_drop_last: bool = False eval_steps: Optional = None dataloader_num_workers: int = 0 dataloader_prefetch_factor: Optional = None past_index: int = -1 run_name: Optional = None disable_tqdm: Optional = None remove_unused_columns: Optional = True label_names: Optional = None load_best_model_at_end: Optional = False metric_for_best_model: Optional = None greater_is_better: Optional = None ignore_data_skip: bool = False fsdp: Union = '' fsdp_min_num_params: int = 0 fsdp_config: Union = None fsdp_transformer_layer_cls_to_wrap: Optional = None accelerator_config: Union = None deepspeed: Union = None label_smoothing_factor: float = 0.0 optim: Union = 'adamw_torch' optim_args: Optional = None adafactor: bool = False group_by_length: bool = False length_column_name: Optional = 'length' report_to: Union = None ddp_find_unused_parameters: Optional = None ddp_bucket_cap_mb: Optional = None ddp_broadcast_buffers: Optional = None dataloader_pin_memory: bool = True dataloader_persistent_workers: bool = False skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: Optional = None hub_model_id: Optional = None hub_strategy: Union = 'every_save' hub_token: Optional = None hub_private_repo: bool = False hub_always_push: bool = False gradient_checkpointing: bool = False gradient_checkpointing_kwargs: Union = None include_inputs_for_metrics: bool = False include_for_metrics: List = <factory> eval_do_concat_batches: bool = True fp16_backend: str = 'auto' evaluation_strategy: Union = None push_to_hub_model_id: Optional = None push_to_hub_organization: Optional = None push_to_hub_token: Optional = None mp_parameters: str = '' auto_find_batch_size: bool = False 
full_determinism: bool = False torchdynamo: Optional = None ray_scope: Optional = 'last' ddp_timeout: Optional = 1800 torch_compile: bool = False torch_compile_backend: Optional = None torch_compile_mode: Optional = None dispatch_batches: Optional = None split_batches: Optional = None include_tokens_per_second: Optional = False include_num_input_tokens_seen: Optional = False neftune_noise_alpha: Optional = None optim_target_modules: Union = None batch_eval_metrics: bool = False eval_on_start: bool = False use_liger_kernel: Optional = False eval_use_gather_object: Optional = False average_tokens_across_devices: Optional = False max_length: Optional = None max_prompt_length: Optional = None max_completion_length: Optional = None beta: float = 0.1 disable_dropout: bool = True label_pad_token_id: int = -100 padding_value: Optional = None truncation_mode: str = 'keep_end' generate_during_eval: bool = False is_encoder_decoder: Optional = None model_init_kwargs: Optional = None dataset_num_proc: Optional = None )

Parameters

  • learning_rate (float, optional, defaults to 1e-6) — Initial learning rate for AdamW optimizer. The default value replaces that of TrainingArguments.
  • max_length (Optional[int], optional, defaults to None) — Maximum length of the sequences (prompt + completion) in the batch. This argument is required if you want to use the default data collator.
  • max_prompt_length (Optional[int], optional, defaults to None) — Maximum length of the prompt. This argument is required if you want to use the default data collator.
  • max_completion_length (Optional[int], optional, defaults to None) — Maximum length of the completion. This argument is required if you want to use the default data collator and your model is an encoder-decoder.
  • beta (float, optional, defaults to 0.1) — Parameter controlling the relative ratio loss weight in the ORPO loss. In the paper, it is denoted by λ. In the code, it is denoted by alpha.
  • disable_dropout (bool, optional, defaults to True) — Whether to disable dropout in the model.
  • label_pad_token_id (int, optional, defaults to -100) — Label pad token id. This argument is required if you want to use the default data collator.
  • padding_value (Optional[int], optional, defaults to None) — Padding value to use. If None, the padding value of the tokenizer is used.
  • truncation_mode (str, optional, defaults to "keep_end") — Truncation mode to use when the prompt is too long. Possible values are "keep_end" or "keep_start". This argument is required if you want to use the default data collator.
  • generate_during_eval (bool, optional, defaults to False) — If True, generates and logs completions from the model to W&B during evaluation.
  • is_encoder_decoder (Optional[bool], optional, defaults to None) — When using the model_init argument (callable) to instantiate the model instead of the model argument, you need to specify if the model returned by the callable is an encoder-decoder model.
  • model_init_kwargs (Optional[Dict[str, Any]], optional, defaults to None) — Keyword arguments to pass to AutoModelForCausalLM.from_pretrained when instantiating the model from a string.
  • dataset_num_proc (Optional[int], optional, defaults to None) — Number of processes to use for processing the dataset.

Configuration class for the ORPOTrainer.

Using HfArgumentParser, we can turn this class into argparse arguments that can be specified on the command line.
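For instance, a minimal sketch (the script name is hypothetical):

# parse_orpo_args.py
from transformers import HfArgumentParser
from trl import ORPOConfig

parser = HfArgumentParser(ORPOConfig)
(training_args,) = parser.parse_args_into_dataclasses()
# e.g.: python parse_orpo_args.py --output_dir out --beta 0.1 --max_length 1024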
