---
license: llama2
library_name: nemo
language:
- en
pipeline_tag: text-generation
inference: false
fine-tuning: true
tags:
- nvidia
- steerlm
- llama2
datasets:
- nvidia/HelpSteer
- OpenAssistant/oasst1
---

# Llama2-13B-SteerLM-RM

## License
The use of this model is governed by the [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/).

## Description:
Llama2-13B-SteerLM-RM is a 13-billion-parameter language model used as the Reward Model/Attribute Prediction Model in training [Llama2-70B-SteerLM-Chat](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat). It takes input with a context length of up to 4,096 tokens.

Given a conversation with multiple turns between user and assistant, it rates the following attributes (between 0 and 4) for every assistant turn:

1. **Helpfulness**: Overall helpfulness of the response to the prompt.
2. **Correctness**: Inclusion of all pertinent facts without errors.
3. **Coherence**: Consistency and clarity of expression.
4. **Complexity**: Intellectual depth required to write the response (i.e., whether the response can be written by anyone with basic language competency or requires deep domain expertise).
5. **Verbosity**: Amount of detail included in the response, relative to what is asked for in the prompt.

HelpSteer paper: [HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM](http://arxiv.org/abs/2311.09528)

SteerLM paper: [SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF](https://arxiv.org/abs/2310.05344)

Llama2-13B-SteerLM-RM is trained with NVIDIA NeMo, an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere. It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI.

## Usage:

You can use the model with [NeMo Aligner](https://github.com/NVIDIA/NeMo-Aligner) following the [SteerLM training user guide](https://docs.nvidia.com/nemo-framework/user-guide/latest/modelalignment/steerlm.html).

This model can be used to train a model like [Llama2-70B-SteerLM-Chat](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat) or to annotate the attributes of any conversation.

1. Spin up an inference server within the [NeMo Aligner container](https://github.com/NVIDIA/NeMo-Aligner/blob/main/Dockerfile):

```bash
python /opt/NeMo-Aligner/examples/nlp/gpt/serve_reward_model.py \
      rm_model_file=Llama2-13B-SteerLM-RM.nemo \
      trainer.num_nodes=1 \
      trainer.devices=8 \
      ++model.tensor_model_parallel_size=4 \
      ++model.pipeline_model_parallel_size=1 \
      inference.micro_batch_size=2 \
      inference.port=1424
```

2. Annotate data files using the served reward model. If you are seeking to reproduce the training of [Llama2-70B-SteerLM-Chat](https://huggingface.co/nvidia/Llama2-70B-SteerLM-Chat), these will be the Open Assistant train/val files.

```bash
python /opt/NeMo-Aligner/examples/nlp/data/steerlm/preprocess_openassistant_data.py --output_directory=data/oasst

python /opt/NeMo-Aligner/examples/nlp/data/steerlm/attribute_annotate.py \
      --input-file=data/oasst/train.jsonl \
      --output-file=data/oasst/train_labeled.jsonl \
      --port=1424
```
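
As an illustration only (this helper is hypothetical and not part of NeMo Aligner), each annotated assistant turn carries a comma-separated label string of the kind shown in this card (e.g. `helpfulness:4,correctness:4,coherence:4`); a minimal sketch for turning such a string into numeric scores, assuming integer values as in the examples here:

```python
def parse_label(label: str) -> dict:
    """Parse a label string like 'helpfulness:4,correctness:4' into {attribute: score}."""
    scores = {}
    for part in label.split(","):
        name, value = part.split(":")
        scores[name.strip()] = int(value)  # assumes integer scores, as in this card's examples
    return scores

# Example with the placeholder label used later in this card:
parse_label("quality:4,toxicity:0,humor:0,creativity:0,"
            "helpfulness:4,correctness:4,coherence:4,complexity:4,verbosity:4")
```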

3. Alternatively, the input can be any conversational data file in `.jsonl` format, where each line looks like:

```json
{
  "conversations": [
    {"value": <user_turn_1>, "from": "User", "label": null},
    {"value": <assistant_turn_1>, "from": "Assistant", "label": <formatted_label_1>},
    {"value": <user_turn_2>, "from": "User", "label": null},
    {"value": <assistant_turn_2>, "from": "Assistant", "label": <formatted_label_2>}
  ],
  "mask": "User"
}
```

Ideally, each `<formatted_label_n>` refers to the ground-truth label for the assistant turn, but if none is available, you can also use `quality:4,toxicity:0,humor:0,creativity:0,helpfulness:4,correctness:4,coherence:4,complexity:4,verbosity:4`.
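
The record format above can be assembled with the standard library alone; `json.dumps` serializes Python `None` as JSON `null`. A minimal sketch, in which the helper `make_record` and the sample turns are hypothetical:

```python
import json

# Placeholder label for assistant turns when no ground-truth label is available,
# taken verbatim from this card.
DEFAULT_LABEL = ("quality:4,toxicity:0,humor:0,creativity:0,"
                 "helpfulness:4,correctness:4,coherence:4,complexity:4,verbosity:4")

def make_record(turns):
    """Build one .jsonl record from alternating (user, assistant) turn pairs."""
    conversations = []
    for user_text, assistant_text in turns:
        conversations.append({"value": user_text, "from": "User", "label": None})
        conversations.append({"value": assistant_text, "from": "Assistant",
                              "label": DEFAULT_LABEL})
    return {"conversations": conversations, "mask": "User"}

record = make_record([("What is 2 + 2?", "2 + 2 equals 4.")])
line = json.dumps(record)  # one line of the .jsonl file; None becomes null
```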

## Contact

E-Mail: [Zhilin Wang](mailto:zhilinw@nvidia.com)

## Citation

If you find this model useful, please cite the following works:

```bibtex
@misc{wang2023helpsteer,
      title={HelpSteer: Multi-attribute Helpfulness Dataset for SteerLM},
      author={Zhilin Wang and Yi Dong and Jiaqi Zeng and Virginia Adams and Makesh Narsimhan Sreedhar and Daniel Egert and Olivier Delalleau and Jane Polak Scowcroft and Neel Kant and Aidan Swope and Oleksii Kuchaiev},
      year={2023},
      eprint={2311.09528},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```bibtex
@misc{dong2023steerlm,
      title={SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF},
      author={Yi Dong and Zhilin Wang and Makesh Narsimhan Sreedhar and Xianchao Wu and Oleksii Kuchaiev},
      year={2023},
      eprint={2310.05344},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```