Data selection can reduce the amount of training data needed to finetune LLMs; however, the efficacy of data selection scales directly with its compute. Motivated by the practical challenge of compute-constrained finetuning, we consider the setting in which both the cost of selecting data and the cost of training are budgeted for. We first formalize the problem of data selection with a cost-aware utility function, modeling data selection as a trade-off between initial selection cost and training gain. We run a comprehensive sweep of experiments across multiple tasks, varying the compute budget by scaling finetuning tokens, model sizes, and data selection compute. Interestingly, we find that many powerful data selection methods are almost never compute-optimal, and that cheaper alternatives dominate from both a theoretical and an empirical perspective. For compute-optimal training, we find that perplexity-based and gradient-based data selection require training-to-selection model size ratios of 5x and 10x, respectively.
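To make the budgeted setting concrete, a minimal sketch of the cost-aware objective follows; the symbols $U$, $C_{\mathrm{sel}}$, $C_{\mathrm{train}}$, and $B$ are illustrative placeholders of our own choosing, not necessarily the paper's notation:
\[
\max_{S \subseteq D} \; U(S) \qquad \text{subject to} \qquad C_{\mathrm{sel}}(S) + C_{\mathrm{train}}(S) \le B,
\]
where $D$ is the candidate data pool, $U(S)$ is the downstream utility after finetuning on the selected subset $S$, $C_{\mathrm{sel}}$ and $C_{\mathrm{train}}$ are the compute spent on selection and training, and $B$ is the total budget. Under this view, an expensive selector is compute-optimal only when the utility it buys exceeds what the same compute would buy as additional training.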