Contemporary practices in instruction tuning often prioritize scaling up data volume without a clear strategy for ensuring data quality, inadvertently introducing noise that may compromise model performance. To address this challenge, we introduce \textsc{Nuggets}, a novel and efficient methodology that leverages one-shot learning to discern and select high-quality instruction data from extensive datasets. \textsc{Nuggets} assesses the potential of individual instruction examples to act as effective one-shot learning instances, thereby identifying those that can significantly improve performance across diverse tasks. \textsc{Nuggets} employs a scoring system based on the impact of candidate examples on the perplexity of a diverse anchor set, facilitating the selection of the most advantageous data for instruction tuning. Through comprehensive evaluations on two benchmarks, MT-Bench and Alpaca-Eval, we show that instruction tuning with the top 1\% of examples curated by \textsc{Nuggets} substantially outperforms conventional methods employing the entire dataset.
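The perplexity-based scoring described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `golden_score`, the `loglik` callback, and the prompt-concatenation scheme are assumptions. The idea is that a candidate earns credit for each anchor example whose answer becomes more likely (equivalently, lower in perplexity) when the candidate is prepended as a one-shot demonstration.

```python
# Hypothetical sketch of one-shot scoring over an anchor set.
# `loglik(prompt, target)` is an assumed interface returning the
# log-likelihood of `target` given `prompt` under the base model.

def golden_score(candidate, anchors, loglik):
    """Fraction of anchor (task, answer) pairs whose answer log-likelihood
    improves when `candidate` is prepended as a one-shot demonstration."""
    improved = 0
    for task, answer in anchors:
        zero_shot = loglik(task, answer)                      # no demonstration
        one_shot = loglik(candidate + "\n" + task, answer)    # candidate as demo
        if one_shot > zero_shot:
            improved += 1
    return improved / len(anchors)
```

Candidates would then be ranked by this score and the top fraction (e.g. 1\%) retained for instruction tuning.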