Data Advisor: Dynamic Data Curation for Safety Alignment of Large Language Models
Abstract
Data is a crucial element in large language model (LLM) alignment. Recent studies have explored using LLMs for efficient data collection. However, LLM-generated data often suffers from quality issues, with underrepresented or absent aspects and low-quality datapoints. To address these problems, we propose Data Advisor, an enhanced LLM-based method for generating data that takes into account the characteristics of the desired dataset. Starting from a set of pre-defined principles, Data Advisor monitors the status of the generated data, identifies weaknesses in the current dataset, and advises the next iteration of data generation accordingly. Data Advisor can be easily integrated into existing data generation methods to enhance data quality and coverage. Experiments on safety alignment of three representative LLMs (i.e., Mistral, Llama2, and Falcon) demonstrate the effectiveness of Data Advisor in enhancing model safety against various fine-grained safety issues without sacrificing model utility.
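The monitor-diagnose-advise loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the principle list, the coverage-count heuristic, and the stub generator are all assumptions standing in for the paper's LLM-based monitoring and generation steps.

```python
# Hypothetical sketch of the Data Advisor loop: monitor the dataset,
# identify its weakest aspect, and direct the next generation round there.
# Principle names and all helper functions are illustrative assumptions.

PRINCIPLES = ["harmful instructions", "privacy leaks", "misinformation"]

def summarize_dataset(dataset):
    """Monitor step: count datapoints covering each safety principle."""
    counts = {p: 0 for p in PRINCIPLES}
    for point in dataset:
        counts[point["principle"]] += 1
    return counts

def identify_weakness(counts):
    """Diagnose step: the least-covered principle is the current weakness."""
    return min(counts, key=counts.get)

def generate_data(principle, n=2):
    """Generation step: stand-in for LLM generation targeting a principle."""
    return [{"principle": principle,
             "text": f"example targeting {principle}"} for _ in range(n)]

def data_advisor_loop(iterations=6):
    dataset = []
    for _ in range(iterations):
        counts = summarize_dataset(dataset)      # monitor data status
        weakness = identify_weakness(counts)     # find underrepresented aspect
        dataset.extend(generate_data(weakness))  # advise next generation round
    return dataset

data = data_advisor_loop()
print(summarize_dataset(data))  # coverage ends up balanced across principles
```

Because each round steers generation toward the currently weakest principle, coverage stays balanced as the dataset grows, which is the property the paper targets with LLM-based monitoring in place of these simple counts.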
Community
Accepted to EMNLP 2024 main conference.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SAGE-RT: Synthetic Alignment data Generation for Safety Evaluation and Red Teaming (2024)
- See What LLMs Cannot Answer: A Self-Challenge Framework for Uncovering LLM Weaknesses (2024)
- ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time (2024)
- Balancing Cost and Effectiveness of Synthetic Data Generation Strategies for LLMs (2024)
- Value Alignment from Unstructured Text (2024)