We fine-tune GPT-3 to answer long-form questions using a text-based web-browsing environment, which allows the model to search and navigate the web. By setting up the task so that it can be performed by humans, we are able to train models on the task using imitation learning, and then optimize answer quality with human feedback. To make human evaluation of factual accuracy easier, models must collect references while browsing in support of their answers. We train and evaluate our models on ELI5, a dataset of questions asked by Reddit users. Our best model is obtained by fine-tuning GPT-3 using behavior cloning, and then performing rejection sampling against a reward model trained to predict human preferences. This model's answers are preferred by humans 56 percent of the time to those of our human demonstrators, and 69 percent of the time to the highest-voted answer from Reddit.
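
The final step described above, rejection sampling against a reward model, amounts to best-of-n selection: sample several candidate answers from the behavior-cloned policy and keep the one the reward model scores highest. Below is a minimal sketch of that selection step; generate_answer and reward_model_score are hypothetical stand-ins for the fine-tuned policy and the learned preference model, not names from the paper.

    # Best-of-n rejection sampling (sketch). Assumes two hypothetical callables:
    #   generate_answer(question) -> str        : samples one answer from the policy
    #   reward_model_score(question, answer)    : returns a scalar preference score
    def best_of_n(question: str, generate_answer, reward_model_score, n: int = 4) -> str:
        """Sample n candidate answers and return the one the reward model prefers."""
        candidates = [generate_answer(question) for _ in range(n)]
        return max(candidates, key=lambda answer: reward_model_score(question, answer))

The design choice here is that the policy itself is never updated at inference time; quality improves purely by spending more samples and letting the reward model act as a filter for human preference.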