Upload abstract/2304.06762.txt with huggingface_hub
abstract/2304.06762.txt +1 -0
abstract/2304.06762.txt
ADDED
@@ -0,0 +1 @@
+Large decoder-only language models can be substantially improved in terms of perplexity by retrieval, as in RETRO, but the impact of retrieval on text generation quality and downstream task accuracy is unclear. Thus, it remains an open question: shall we pretrain large autoregressive language models with retrieval? To answer this question, we perform a comprehensive study of a scalable pretrained retrieval-augmented language model, RETRO, compared with standard GPT and with retrieval-augmented GPT in which retrieval is incorporated at the fine-tuning or inference stage. We first provide the recipe to reproduce RETRO at up to 9.5 billion parameters while retrieving from a text corpus of 330 billion tokens. Based on this setup, we report the following novel findings: 1) RETRO outperforms GPT on text generation with much less repetition, moderately higher factual accuracy, and slightly lower toxicity when a nontoxic retrieval database is used. 2) On the LM Evaluation Harness benchmark, RETRO largely outperforms GPT on knowledge-intensive tasks but is on par with GPT on other tasks. Furthermore, we introduce a simple variant of the model, RETRO++, which substantially improves the open-domain question answering results of the original RETRO and significantly outperforms retrieval-augmented GPT across different model sizes. Our findings highlight the promising direction of pretraining autoregressive language models with retrieval as future foundation models. We release our implementation at the project's website.
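The abstract contrasts RETRO-style pretraining with retrieval that is bolted onto a standard GPT only at inference time. As a rough illustration of that inference-time baseline (not of RETRO's chunked cross-attention), the sketch below prepends retrieved passages to the prompt of a frozen decoder-only model. The toy corpus, TF-IDF retriever, `retrieve` helper, and the public `gpt2` checkpoint are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of inference-time retrieval augmentation for a frozen
# decoder-only LM ("retrieval-augmented GPT at inference").
# Assumptions: a tiny in-memory corpus, TF-IDF similarity, and the public
# "gpt2" checkpoint stand in for the paper's retrieval database and models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoModelForCausalLM, AutoTokenizer

corpus = [
    "RETRO augments a decoder-only transformer with retrieved text chunks.",
    "Perplexity measures how well a language model predicts held-out text.",
    "Open-domain question answering requires retrieving supporting evidence.",
]

vectorizer = TfidfVectorizer().fit(corpus)
corpus_vectors = vectorizer.transform(corpus)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query (hypothetical helper)."""
    scores = cosine_similarity(vectorizer.transform([query]), corpus_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

question = "What does RETRO add to a standard decoder-only language model?"
# Retrieval at inference: prepend neighbors to the prompt of an unmodified LM.
prompt = "\n".join(retrieve(question)) + f"\nQuestion: {question}\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
# Print only the newly generated continuation, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

In this baseline the language model itself is unchanged; only the prompt carries the retrieved context, which is the key difference from pretraining the model with retrieval as studied in the paper.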