---
license: cc-by-nc-4.0
library_name: nemo
language:
  - en
pipeline_tag: text-generation
inference: false
fine-tuning: true
tags:
  - nvidia
  - llama2
datasets:
  - Anthropic/hh-rlhf
---

# Llama2-13B-RLHF-RM

## Description:

Llama2-13B-RLHF-RM is a 13-billion-parameter language model (with a context length of up to 4,096 tokens) used as the reward model in training NV-Llama2-70B-RLHF, which achieves 7.59 on MT-Bench and demonstrates strong performance on academic benchmarks.

Starting from the Llama2-13B base model, it was first instruction-tuned on a combination of public and proprietary data and then trained on the HH-RLHF dataset with a reward-modeling objective. Given a conversation with multiple turns between a user and an assistant, it assigns a score for the overall helpfulness of the final assistant turn.
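The reward-modeling objective mentioned above is typically a pairwise (Bradley-Terry style) loss over preference pairs such as those in HH-RLHF: the model is trained to assign a higher reward to the human-preferred response than to the rejected one. The sketch below illustrates that loss on scalar rewards; it is a simplified illustration, not the actual NeMo-Aligner implementation, which operates on batched model outputs.

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style pairwise loss: -log(sigmoid(r_chosen - r_rejected)).

    Minimizing this loss pushes the reward for the human-preferred (chosen)
    response above the reward for the rejected one. Illustrative only.
    """
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss shrinks as the margin between chosen and rejected rewards grows,
# and equals log(2) when the two rewards are tied.
print(pairwise_reward_loss(2.0, 0.0))  # small loss: chosen is ranked higher
print(pairwise_reward_loss(0.0, 2.0))  # large loss: ranking is inverted
```

Averaged over a dataset of preference pairs, this objective yields a scalar-valued reward model suitable for scoring responses during PPO.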

Llama2-13B-RLHF-RM was trained with NVIDIA NeMo-Aligner, a scalable toolkit for efficient model alignment. NeMo-Aligner is built on the NeMo Toolkit, which allows training to scale to thousands of GPUs using tensor, data, and pipeline parallelism for all components of alignment. All of our checkpoints are cross-compatible with the NeMo ecosystem, allowing for inference deployment and further customization.

## Usage:

Training a reward model is an essential component of Reinforcement Learning from Human Feedback (RLHF). A strong reward model mitigates the risk of reward hacking and ensures that the actor is incentivized to produce helpful responses. We are open-sourcing this reward model so that users can seamlessly integrate it with Proximal Policy Optimization (PPO) training in NeMo-Aligner. For detailed instructions on how to conduct the training, please refer to our RLHF training user guide.
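During PPO, the reward model is typically served as a separate process that the trainer queries over the network. The sketch below shows the general shape of such a client: it flattens a multi-turn conversation into a single prompt and posts it to a locally served reward-model endpoint. The "User:"/"Assistant:" template, the URL, and the JSON field names are illustrative assumptions only; consult the NeMo-Aligner RLHF user guide for the exact serving scripts and request format.

```python
import json
from urllib import request

def build_rm_request(conversation: list) -> bytes:
    """Flatten a multi-turn conversation into one prompt and wrap it as JSON.

    The role-prefix template and payload keys here are hypothetical; the
    real serving format is defined by the NeMo-Aligner serving scripts.
    """
    prompt = "\n".join(f"{turn['role']}: {turn['text']}" for turn in conversation)
    return json.dumps({"sentences": [prompt]}).encode("utf-8")

def score_conversation(conversation, url="http://localhost:5555/reward"):
    """POST a conversation to a (hypothetical) local reward-model server."""
    req = request.Request(
        url,
        data=build_rm_request(conversation),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["rewards"][0]

conversation = [
    {"role": "User", "text": "How do I boil an egg?"},
    {"role": "Assistant", "text": "Place the egg in boiling water for about 9 minutes."},
]
payload = build_rm_request(conversation)
print(payload.decode("utf-8"))
# score = score_conversation(conversation)  # requires a running RM server
```

Because the model scores only the final assistant turn for overall helpfulness, the full conversation history should be included in the prompt so the score reflects the response in context.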