The composition of training data mixtures is critical for effectively training large language models (LLMs), as it directly impacts their performance on downstream tasks. Our goal is to identify an optimal data mixture to specialize an LLM for a specific task with access to only a few examples. Traditional approaches to this problem include ad-hoc reweighting methods, importance sampling, and gradient alignment techniques. This paper focuses on gradient alignment and introduces Dynamic Gradient Alignment (DGA), a scalable online gradient alignment algorithm. DGA dynamically estimates the pre-training data mixture on which the model's gradients align as closely as possible with those of the model on the specific task. DGA is the first gradient alignment approach that incurs minimal overhead compared to standard pre-training and outputs a competitive model, eliminating the need to retrain the model. Experimentally, we demonstrate significant improvements over importance sampling in two key scenarios: (i) when the pre-training set is small and importance sampling overfits due to limited data; and (ii) when there is insufficient specialized data, trapping importance sampling in narrow pockets of data. Our findings underscore the effectiveness of gradient alignment methods in optimizing training data mixtures, particularly in data-constrained environments, and offer a practical solution for enhancing LLM performance on specific tasks with limited data availability.
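To make the gradient-alignment idea concrete, the following is a minimal illustrative sketch, not the paper's actual DGA algorithm: domain mixture weights are updated online by upweighting domains whose (hypothetical, precomputed) gradient estimates have a high dot product with the gradient on the specialization task. The function name, the exponentiated-gradient update, and the learning rate are all assumptions for illustration.

```python
import math

def update_mixture(weights, domain_grads, task_grad, lr=0.5):
    """One illustrative exponentiated-gradient step on domain mixture weights.

    weights      -- current mixture weights over pre-training domains (sum to 1)
    domain_grads -- one gradient estimate (list of floats) per domain
    task_grad    -- gradient estimate on the few-shot specialization task
    lr           -- step size for the multiplicative update (assumed value)
    """
    # Alignment score: dot product between each domain's gradient
    # and the task gradient; aligned domains get positive scores.
    scores = [sum(g * t for g, t in zip(dg, task_grad)) for dg in domain_grads]
    # Multiplicative (exponentiated-gradient) update, then renormalize
    # so the weights remain a valid mixture distribution.
    new_w = [w * math.exp(lr * s) for w, s in zip(weights, scores)]
    z = sum(new_w)
    return [w / z for w in new_w]

# Toy usage: domain 0 aligns with the task, domain 1 opposes it,
# so mass shifts toward domain 0 after one update.
w = update_mixture([0.5, 0.5], [[1.0, 0.0], [-1.0, 0.0]], [1.0, 0.0])
```

In a real training loop the domain and task gradients would be recomputed periodically as the model trains, which is what makes the estimate dynamic rather than a one-shot reweighting.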