Rethinking Data Selection at Scale: Random Selection is Almost All You Need
Abstract
Supervised fine-tuning (SFT) is crucial for aligning Large Language Models (LLMs) with human instructions. The primary goal during SFT is to select a small yet representative subset from the larger training pool, such that fine-tuning on this subset achieves results comparable to, or even exceeding, those obtained with the entire dataset. However, most existing data selection techniques are designed for small-scale data pools and fail to meet the demands of real-world SFT scenarios. In this paper, we replicated several self-scoring methods (i.e., those that do not rely on external model assistance) on two million-scale datasets, and found that nearly all of them struggled to significantly outperform random selection on such large-scale data pools. Moreover, our comparisons suggest that, during SFT, diversity in data selection matters more than simply focusing on high-quality data. We also analyzed the limitations of several current approaches, explaining why they perform poorly on large-scale datasets and why they are unsuitable for such contexts. Finally, we found that filtering data by token length offers a stable and efficient way to improve results. This approach, particularly when training on long-text data, proves highly beneficial for relatively weaker base models, such as Llama3.
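To make the two selection strategies mentioned in the abstract concrete, below is a minimal sketch contrasting a random-selection baseline with token-length filtering (keeping the longest examples by token count). The toy pool, field names, tokenizer choice, and subset size are illustrative assumptions, not the paper's exact pipeline.

```python
import random
from transformers import AutoTokenizer

# Toy stand-in for a million-scale SFT pool; a real pool would be loaded from disk.
pool = [
    {"instruction": "Explain gradient descent.", "response": "Gradient descent iteratively updates parameters..."},
    {"instruction": "Translate 'hello' to French.", "response": "Bonjour."},
    {"instruction": "Write a sorting function.", "response": "def sort(xs): return sorted(xs)"},
]

def random_selection(pool, k, seed=42):
    """Baseline the paper compares against: uniformly sample k examples."""
    rng = random.Random(seed)
    return rng.sample(pool, k)

def token_length_selection(pool, k, tokenizer):
    """Keep the k examples whose instruction + response span the most tokens."""
    def n_tokens(example):
        text = example["instruction"] + "\n" + example["response"]
        return len(tokenizer(text, add_special_tokens=False)["input_ids"])
    return sorted(pool, key=n_tokens, reverse=True)[:k]

if __name__ == "__main__":
    # "gpt2" is only a placeholder; swap in the tokenizer of the base model you fine-tune.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    print(random_selection(pool, k=2))
    print(token_length_selection(pool, k=2, tokenizer=tokenizer))
```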
Community
This paper, for the first time, validates all previous self-scoring data selection methods on two million-scale SFT data pools, revealing that their performance is comparable to random selection—a finding that is truly thought-provoking.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Diversify and Conquer: Diversity-Centric Data Selection with Iterative Refinement (2024)
- SFTMix: Elevating Language Model Instruction Tuning with Mixup Recipe (2024)
- Harnessing Diversity for Important Data Selection in Pretraining Large Language Models (2024)
- CommonIT: Commonality-Aware Instruction Tuning for Large Language Models via Data Partitions (2024)
- Self-Boosting Large Language Models with Synthetic Preference Data (2024)
nice work