Training with mixed data distributions is a common and important part of creating multi-task and instruction-following models. The diversity of the data distributions and the cost of joint training make the optimization procedure extremely challenging. Data mixing methods partially address this problem, but they achieve sub-optimal performance across data sources and require multiple expensive training runs. In this paper, we propose a simple and efficient alternative that better optimizes over the data sources by combining models individually trained on each data source with the base model using basic element-wise vector operations. The resulting model, the Distribution Edited Model (DEM), is 11x cheaper than standard data mixing and outperforms strong baselines on a variety of benchmarks, yielding up to 6.2% improvement on MMLU, 11.5% on BBH, 16.1% on DROP, 6% on MathQA, and 9.3% on HELM with models of size 3B to 13B. Notably, DEM does not require full re-training when a single data source is modified, making it flexible and scalable for training with diverse data sources.
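To make the combination step concrete, below is a minimal sketch of one plausible instantiation, assuming the "basic element-wise vector operations" take the form of task-arithmetic-style merging: each per-source fine-tuned model contributes a distribution vector (its parameters minus the base model's), and the merged model adds a weighted sum of these vectors back to the base. The function name `distribution_edited_model` and the mixing weights are illustrative assumptions, not details specified in the abstract.

```python
# Sketch of distribution editing via element-wise parameter arithmetic.
# ASSUMPTION: the combination is theta_base + sum_i w_i * (theta_i - theta_base);
# the exact operator and weights used by DEM are not given in the abstract.
import torch

def distribution_edited_model(base_state, finetuned_states, weights):
    """Combine per-source fine-tuned models with the base model.

    base_state:       state_dict of the base model
    finetuned_states: list of state_dicts, one per data source
    weights:          list of floats, one mixing weight per source
    """
    merged = {k: v.clone() for k, v in base_state.items()}
    for ft_state, w in zip(finetuned_states, weights):
        for k in merged:
            # Skip non-float buffers (e.g., integer position ids).
            if not merged[k].is_floating_point():
                continue
            # Element-wise: add the weighted "distribution vector"
            # (fine-tuned minus base) for this data source.
            merged[k] += w * (ft_state[k] - base_state[k])
    return merged
```

Under this formulation, each distribution vector is computed independently, so swapping or updating a single data source only requires re-fine-tuning that one model and re-running the cheap merge, consistent with the flexibility claim above.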