With contributions from the open-source community, a vast amount of instruction tuning (IT) data has emerged. Given the significant resources required for training and evaluating models, it is advantageous to have an efficient method for selecting high-quality IT data. However, existing methods for instruction data selection have limitations such as relying on fragile external APIs, being affected by biases in GPT models, or reducing the diversity of the selected instruction dataset. In this paper, we propose an industry-friendly, expert-aligned, and diversity-preserving instruction data selection method: Clustering and Ranking (CaR). CaR employs a two-step process: first, it ranks instruction pairs using a high-accuracy (84.25%) scoring model aligned with expert preferences; second, it preserves dataset diversity through clustering. In our experiments, CaR efficiently selected a mere 1.96% of Alpaca's IT data, yet the resulting AlpaCaR model surpassed Alpaca's performance by an average of 32.1% in GPT-4 evaluations. Moreover, we find that data selection remains a consistent paradigm whether the pre-trained model is more capable or the model parameters are scaled up. Our approach employs compact models with 550M parameters and incurs just 11.2% of the financial outlay of current methods, enhancing its industrial deployability.
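The two-step cluster-then-rank selection described above can be illustrated with a minimal sketch. This is not the paper's implementation: the embeddings and quality scores are assumed to come from some external scoring/embedding model, and the clustering here is a small hand-rolled k-means (farthest-point initialization plus a few Lloyd iterations) purely for illustration.

```python
import numpy as np

def car_select(embeddings, scores, n_clusters=3, per_cluster=1, n_iter=10):
    """Sketch of CaR-style selection: cluster instruction embeddings,
    then keep the highest-scored example(s) from each cluster so that
    diversity (clusters) and quality (scores) are both preserved."""
    embeddings = np.asarray(embeddings, dtype=float)
    scores = np.asarray(scores, dtype=float)

    # Farthest-point initialization: deterministic and spreads centroids out.
    centroids = [embeddings[0]]
    for _ in range(n_clusters - 1):
        dists = np.min(
            [np.linalg.norm(embeddings - c, axis=1) for c in centroids], axis=0
        )
        centroids.append(embeddings[dists.argmax()])
    centroids = np.array(centroids)

    # A few Lloyd iterations of k-means.
    for _ in range(n_iter):
        d = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centroids[k] = embeddings[labels == k].mean(axis=0)

    # Rank within each cluster and keep the top-scored members.
    selected = []
    for k in range(n_clusters):
        idx = np.where(labels == k)[0]
        top = idx[np.argsort(scores[idx])[::-1][:per_cluster]]
        selected.extend(top.tolist())
    return sorted(selected)
```

For example, with three well-separated embedding clusters, the function returns the best-scored instruction from each cluster, yielding a small subset that still spans all regions of the data.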