taesiri committed
Commit b779499
1 Parent(s): ef72a14

Upload abstract/2310.13961.txt with huggingface_hub

Files changed (1)
  1. abstract/2310.13961.txt +1 -0
abstract/2310.13961.txt ADDED
@@ -0,0 +1 @@
+ Using in-context learning (ICL) for data generation, techniques such as Self-Instruct (Wang et al., 2023) or the follow-up Alpaca (Taori et al., 2023) can train strong conversational agents with only a small amount of human supervision. One limitation of these approaches is that they resort to very large language models (around 175 billion parameters) that are also proprietary and non-public. Here we explore the application of such techniques to language models that are much smaller (around 10 billion to 40 billion parameters) and have permissive licenses. We find the Self-Instruct approach to be less effective at these sizes and propose new ICL methods that draw on two main ideas: (a) categorization and simplification of the ICL templates to make prompt learning easier for the language model, and (b) ensembling over multiple language model outputs to help select high-quality synthetic examples. Our algorithm leverages the 175 Self-Instruct seed tasks and employs separate pipelines for instructions that require an input and instructions that do not. Empirical investigations with different language models show that: (1) our proposed method yields higher-quality instruction tuning data than Self-Instruct, (2) it improves the performance of both vanilla and instruction-tuned language models by significant margins, and (3) smaller instruction-tuned language models generate more useful outputs than their larger un-tuned counterparts. Our codebase is available at the project's website.
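
To make the "ensembling over multiple language model outputs" idea in the abstract concrete, here is a minimal, hypothetical sketch: several models answer the same synthetic instruction, and a candidate output is kept only if it agrees sufficiently with the other models' outputs. The token-overlap similarity, the consensus rule, and the threshold are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of output ensembling for selecting high-quality synthetic examples.
# Assumptions: a simple token-overlap F1 stands in for a ROUGE-like score, and a
# fixed agreement threshold decides whether the example is kept or discarded.

from typing import List, Optional


def token_f1(a: str, b: str) -> float:
    """Token-overlap F1 between two strings (stand-in for a ROUGE-like metric)."""
    ta, tb = a.lower().split(), b.lower().split()
    if not ta or not tb:
        return 0.0
    common = len(set(ta) & set(tb))
    if common == 0:
        return 0.0
    precision, recall = common / len(ta), common / len(tb)
    return 2 * precision * recall / (precision + recall)


def select_by_consensus(candidates: List[str], threshold: float = 0.5) -> Optional[str]:
    """Return the candidate that agrees most with the other models' outputs,
    or None if no candidate reaches the threshold (the example is discarded)."""
    best_output, best_score = None, 0.0
    for i, cand in enumerate(candidates):
        others = [c for j, c in enumerate(candidates) if j != i]
        score = sum(token_f1(cand, o) for o in others) / max(len(others), 1)
        if score > best_score:
            best_output, best_score = cand, score
    return best_output if best_score >= threshold else None


if __name__ == "__main__":
    # Outputs from three hypothetical mid-sized models for one synthetic instruction.
    outputs = [
        "The capital of France is Paris.",
        "Paris is the capital of France.",
        "I am not sure, possibly Lyon.",
    ]
    print(select_by_consensus(outputs))  # prints one of the two agreeing answers
```

The design intuition is that independent mid-sized models are unlikely to agree on a low-quality answer, so cross-model agreement acts as a cheap quality filter for the generated instruction-tuning data.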