Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models • arXiv:2402.13064 • Published Feb 20, 2024
AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators • arXiv:2303.16854 • Published Mar 29, 2023
Open-Source Large Language Models Outperform Crowd Workers and Approach ChatGPT in Text-Annotation Tasks • arXiv:2307.02179 • Published Jul 5, 2023
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning • arXiv:2305.14045 • Published May 23, 2023
LMTurk: Few-Shot Learners as Crowdsourcing Workers in a Language-Model-as-a-Service Framework • arXiv:2112.07522 • Published Dec 14, 2021
Large Language Models are Few-Shot Clinical Information Extractors • arXiv:2205.12689 • Published May 25, 2022
RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment • arXiv:2304.06767 • Published Apr 13, 2023
Small Models are Valuable Plug-ins for Large Language Models • arXiv:2305.08848 • Published May 15, 2023
Language Models in the Loop: Incorporating Prompting into Weak Supervision • arXiv:2205.02318 • Published May 4, 2022
ZeroGen: Efficient Zero-shot Learning via Dataset Generation • arXiv:2202.07922 • Published Feb 16, 2022
Constrained Language Models Yield Few-Shot Semantic Parsers • arXiv:2104.08768 • Published Apr 18, 2021
Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor • arXiv:2212.09689 • Published Dec 19, 2022
The Turking Test: Can Language Models Understand Instructions? • arXiv:2010.11982 • Published Oct 22, 2020
Automatic Chain of Thought Prompting in Large Language Models • arXiv:2210.03493 • Published Oct 7, 2022