Instruction tuning enables language models to generalize more effectively and follow user intent better. However, obtaining instruction data is costly and challenging. Prior work relies on expensive human annotation, crowd-sourced datasets with alignment issues, or noisy examples generated via LLMs. We introduce the LongForm-C dataset, created by reverse instructions: generating instructions via LLMs for human-written corpus examples. First, we select a diverse set of human-written documents from corpora such as C4 and Wikipedia; then we generate instructions for these documents via LLMs. This approach yields a cheaper, cleaner instruction-tuning dataset with natural outputs, one well suited to long text generation. On tasks such as story/recipe generation and long-form question answering, our models outperform 10x larger language models without instruction tuning. Moreover, LongForm models outperform prior instruction-tuned models such as FLAN-T5 and Alpaca by a large margin and further improve language understanding capabilities. We publicly release our data and models: https://github.com/akoksal/LongForm.
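The reverse-instructions pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: `generate_with_llm` is a hypothetical stand-in for any LLM completion call, and the prompt wording is illustrative rather than the paper's real template.

```python
# Sketch of "reverse instructions": given a human-written document, ask an
# LLM what instruction could have elicited it; the document itself then
# serves as the target output of the resulting training pair.

def generate_with_llm(prompt: str) -> str:
    # Hypothetical placeholder: in practice, call an LLM API here.
    return "Write a short paragraph explaining instruction tuning."

def reverse_instruction(document: str) -> dict:
    """Turn a corpus document into an (instruction, output) training pair."""
    prompt = (
        "Below is a text written by a person. "
        "What instruction could have produced it?\n\n"
        f"Text: {document}\n\nInstruction:"
    )
    instruction = generate_with_llm(prompt).strip()
    # The human-written document becomes the natural, clean target output.
    return {"instruction": instruction, "output": document}

doc = "Instruction tuning adapts language models to follow user requests..."
pair = reverse_instruction(doc)
```

Because the outputs are existing human-written texts rather than LLM generations, only the (typically short) instruction side is model-generated, which is what keeps the dataset cheap and its long-form outputs clean.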