---
title: README
emoji: βš™οΈ
colorFrom: blue
colorTo: purple
sdk: static
pinned: false
short_description: Unsupervised Human Preference Learning
---

# [LINK TO PAPER](https://arxiv.org/abs/2410.03731)

# βš™οΈ Preference Agents

Preference Agents is an organization focused on developing and releasing small language models ("preference agents") that enable efficient personalization of larger language models (LLMs). Our agents learn user preferences and generate natural language rules that guide LLMs to produce tailored content, without requiring extensive fine-tuning of the larger models.

## 🎯 Our Approach

We train small, locally deployable language models to act as "steering wheels" for larger, pre-trained LLMs. These agents learn user preferences from small, personalized datasets and encode them into concise natural language rules. The rules are then provided as context to the larger LLM, guiding its output toward the desired personalized style and content. (A minimal sketch of this prompting flow is included at the end of this README.)

## πŸ“¦ Resources

### Datasets

We release three datasets for research on personalized language modeling:

* **Enron-42k:** A curated subset of the Enron email corpus, focused on original content creation. It contains 40,240 emails from 191 unique senders.
* **The New Yorker:** A curated subset of the All The News 2.0 corpus, containing 4,000 articles from the New Yorker.
* **LAMP 3U Subset:** A subset of the LAMP 3U Amazon product reviews dataset, containing 22,500 reviews from 15 users.

All three datasets are licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.

## πŸš€ How to Use Our Resources

### Datasets

```python
from datasets import load_dataset

# Load the train splits of the Enron-42k and LAMP 3U subsets from the Hub
enron_dataset = load_dataset("preference-agents/Enron-42k", split="train")
lamp_dataset = load_dataset("preference-agents/LAMP-3U-Subset", split="train")
```

## πŸ“Š Evaluation

Our approach was evaluated using GPT-4o and human evaluations, demonstrating significant improvements over baselines such as zero-shot generation, few-shot learning, and naive fine-tuning.

## πŸ“œ Citation

If you use our resources in your research or applications, please cite our paper:

```bibtex
@misc{shashidhar2024unsupervisedhumanpreferencelearning,
      title={Unsupervised Human Preference Learning},
      author={Sumuk Shashidhar and Abhinav Chinta and Vaibhav Sahai and Dilek Hakkani-TΓΌr},
      year={2024},
      eprint={2410.03731},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.03731},
}
```

## πŸ™ Acknowledgements

We thank Meta AI for providing the Llama-3 models, Google AI for access to Gemini 1.5 Pro, and Anthropic for access to Claude 3.5 Sonnet. We also acknowledge the creators of the Enron email corpus and the LAMP 3U dataset for making their valuable resources available to the research community.
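
## 🧩 Example: Steering a Larger LLM with Preference Rules

To make the steering flow described in "Our Approach" concrete, here is a minimal Python sketch. The rule text, `build_prompt`, and `call_llm` below are hypothetical illustrations of how agent-generated rules can be supplied as context to a larger model; they are assumptions for demonstration, not the exact pipeline from the paper.

```python
# Minimal sketch: natural language rules produced by a small preference agent
# are prepended as context to a larger LLM's prompt.

# Hypothetical rules a preference agent might emit for one user.
preference_rules = """\
- Keep emails under five sentences.
- Use an informal greeting ("Hey" rather than "Dear").
- Sign off with just the first name.
"""

def build_prompt(rules: str, task: str) -> str:
    """Combine user-specific preference rules with the generation task."""
    return (
        "Follow these user preferences when writing:\n"
        f"{rules}\n"
        f"Task: {task}"
    )

def call_llm(prompt: str) -> str:
    """Stand-in for any large-model API (e.g., Llama-3, Gemini 1.5 Pro, Claude 3.5 Sonnet)."""
    raise NotImplementedError("Plug in your LLM client here.")

prompt = build_prompt(
    preference_rules,
    "Write an email asking to reschedule Friday's meeting.",
)
# response = call_llm(prompt)
```

Because the rules are plain text, the same larger model can be personalized for many users by swapping in each user's rule set at inference time, with no fine-tuning of the large model itself.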