Despite recent advances, fine-tuning Large Language Models (LLMs) remains costly due to their extensive parameter counts and the substantial data required for generalization, and access to computing resources remains a barrier for the open-source community. To address this challenge, we propose the In2Core algorithm, which selects a coreset by analyzing the correlation between training and evaluation samples under a trained model. Specifically, we use the model's internal gradients to estimate this relationship and rank the contribution of each training point. To improve efficiency, we propose an optimization that computes influence functions with a reduced number of layers while achieving similar accuracy. Applying our algorithm to the instruction fine-tuning data of LLMs, we achieve similar performance with just 50% of the training data. Meanwhile, using influence functions to analyze how well the training set covers certain test samples provides a reliable and interpretable signal of that coverage.
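To make the gradient-based ranking idea concrete, below is a minimal sketch (not the paper's actual implementation) of a first-order influence proxy on a toy logistic-regression model: each training point is scored by the dot product between its per-sample loss gradient and the mean evaluation gradient, and points are ranked by that score. The helper names (`per_sample_grads`, `rank_by_influence`) are hypothetical, and the layer-reduction optimization described in the abstract is not reproduced here.

```python
import numpy as np

def per_sample_grads(X, y, w):
    # Gradient of the logistic loss for each sample: (sigmoid(x.w) - y) * x
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (p - y)[:, None] * X  # shape (n_samples, n_features)

def rank_by_influence(X_train, y_train, X_eval, y_eval, w):
    # First-order influence proxy: dot product between each training
    # sample's gradient and the mean evaluation gradient. A higher score
    # suggests the training point is more aligned with reducing eval loss.
    g_train = per_sample_grads(X_train, y_train, w)
    g_eval = per_sample_grads(X_eval, y_eval, w).mean(axis=0)
    scores = g_train @ g_eval
    return np.argsort(-scores)  # training indices, most influential first
```

A coreset of size k would then keep the first k indices of the returned ranking; for an LLM, the gradients would instead come from (a subset of) the model's layers, which is where the layer-reduction optimization applies.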