We introduce Instruct-SkillMix, an automated approach for creating diverse, high-quality SFT data. The Instruct-SkillMix pipeline involves two stages, each leveraging an existing powerful LLM: (1) Skill extraction: uses the LLM to extract core "skills" for instruction-following, either from existing datasets or by directly prompting the model; (2) Data generation: uses the powerful LLM to generate (instruction, response) data that exhibit a randomly chosen pair of these skills. Here, the use of random skill combinations promotes diversity and difficulty. Vanilla SFT (i.e., no PPO, DPO, or RL methods) on data generated from Instruct-SkillMix leads to strong gains on instruction-following benchmarks such as AlpacaEval 2.0, MT-Bench, and WildBench. With just $4$K examples, LLaMA-3-8B-Base achieves a 42.76% length-controlled win rate on AlpacaEval 2.0. To our knowledge, this is state-of-the-art performance among all models that have only undergone SFT (no RL methods), and it is competitive with proprietary models such as Claude 3 Opus and LLaMA-3.1-405B-Instruct. Ablation studies also suggest plausible reasons why creating open instruction-tuning datasets via naive crowd-sourcing has proved difficult. Introducing low-quality answers ("shirkers") in $20\%$ of Instruct-SkillMix examples causes performance to plummet, sometimes catastrophically. The Instruct-SkillMix pipeline is flexible and adaptable to other settings.
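The core of the data-generation stage (sampling a random pair of extracted skills and prompting an LLM to produce an (instruction, response) pair exhibiting both) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the skill list and prompt wording here are hypothetical placeholders, and in the real pipeline both the skills and the generated data come from a powerful LLM.

```python
import random

# Hypothetical skill list; in the actual pipeline these "skills" are
# extracted by an LLM from existing datasets or by direct prompting.
SKILLS = [
    "logical reasoning",
    "persuasive writing",
    "code generation",
    "concise summarization",
]

def make_generation_prompt(skills, k=2, seed=None):
    """Sample k skills uniformly at random (without replacement) and
    build a prompt asking an LLM for an (instruction, response) pair
    that exhibits the chosen skill combination."""
    rng = random.Random(seed)
    pair = rng.sample(skills, k)
    return (
        "Generate an (instruction, response) pair whose instruction "
        f"requires combining these skills: {', '.join(pair)}. "
        "The response should demonstrate both skills at a high level."
    )

# The returned string would be sent to the data-generating LLM.
print(make_generation_prompt(SKILLS, seed=0))
```

Because the pair is drawn uniformly from all skill combinations, repeated calls cover many distinct combinations, which is the mechanism the abstract credits for diversity and difficulty.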