---
model-index:
  - name: tulu-v2.5-ppo-13b-uf-mean-13b-mix-rm
    results: []
datasets:
  - allenai/tulu-2.5-preference-data
  - allenai/tulu-v2-sft-mixture
language:
  - en
base_model: allenai/tulu-v2.5-13b-preference-mix-rm
license: apache-2.0
---
Tulu 2.5 banner image

Model Card for Tulu V2.5 PPO 13B - UltraFeedback Mean w. 13B mixture RM

Tulu is a series of language models trained to act as helpful assistants. Tulu V2.5 is a series of models trained using DPO and PPO, starting from the Tulu 2 suite. This model was trained with PPO on the UltraFeedback dataset, using the per-aspect (fine-grained) scores to decide chosen and rejected responses. The reward model was the Tulu V2.5 13B preference-mixture RM, a 13B RM trained on our preference data mix, and the UltraFeedback prompts were used during PPO training.

For more details, read the paper: Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback.

Model description

  • Model type: A model belonging to a suite of RLHF-tuned chat models trained on a mix of publicly available, synthetic, and human-created datasets.
  • Language(s) (NLP): English
  • License: Apache 2.0.
  • Finetuned from model: meta-llama/Llama-2-13b-hf

Model Sources

  • Repository: https://github.com/allenai/open-instruct
  • Dataset: Data used to train this model can be found here - specifically the ultrafeedback_mean_aspects split. Only the prompts were used.
  • Model Family: The collection of related models can be found here.
  • Reward Model: The reward model used during PPO training can be found here, and the data used to train it here - specifically the preference_big_mixture split.
  • Value Model: The value model trained during PPO training can be found here.

Input Format

The model is trained to use the following format (note the newlines):

```
<|user|>
Your message here!
<|assistant|>
```

For best results, format all inputs in this manner. Make sure to include a newline after <|assistant|>; this can affect generation quality quite a bit. We have included a chat template in the tokenizer implementing this format.
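
As a minimal sketch, the bundled chat template can be applied with transformers as below; the repo id is assumed from the model name in the metadata, so adjust it if it differs:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-v2.5-ppo-13b-uf-mean-13b-mix-rm"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Your message here!"}]
# add_generation_prompt=True appends "<|assistant|>\n", so generation starts
# inside the assistant turn with the required trailing newline.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```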

Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed version of the Tulu V2 mix dataset, which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs. We then further aligned the model with a Jax PPO trainer built on EasyLM, using the dataset mentioned above.

Bias, Risks, and Limitations

The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base Llama 2 models are also unknown, but it is likely to have included a mix of web data and technical sources like books and code. See the Falcon 180B model card for an example of this.

Training hyperparameters

The following hyperparameters were used during PPO training:

  • learning_rate: 1e-06
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1.0
  • KL penalty coefficient: 0.05
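
For illustration, the KL penalty coefficient enters PPO as a per-token penalty on divergence from the frozen reference policy. The sketch below shows a common formulation of this step; the function and values are hypothetical, not the exact open-instruct implementation:

```python
import torch

def penalized_rewards(rm_score, policy_logprobs, ref_logprobs, kl_coef=0.05):
    """Combine the reward model score with a per-token KL penalty.

    rm_score: scalar RM score for the whole sampled response.
    policy_logprobs / ref_logprobs: per-token log-probs of the response
    under the current policy and the frozen reference (SFT) model.
    """
    kl = policy_logprobs - ref_logprobs      # per-token KL estimate
    rewards = -kl_coef * kl                  # penalize drift from the reference
    rewards[-1] = rewards[-1] + rm_score     # RM score added at the final token
    return rewards

# Toy example with made-up log-probs:
print(penalized_rewards(
    rm_score=torch.tensor(1.3),
    policy_logprobs=torch.tensor([-0.5, -1.2, -0.8]),
    ref_logprobs=torch.tensor([-0.6, -1.0, -0.9]),
))
```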

Citation

If you find Tulu 2.5 useful in your work, please cite it with:

```bibtex
@misc{ivison2024unpacking,
      title={{Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback}},
      author={Hamish Ivison and Yizhong Wang and Jiacheng Liu and Ellen Wu and Valentina Pyatkin and Nathan Lambert and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi},
      year={2024},
      eprint={2406.09279},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```