Model merging has emerged as an effective approach to combining multiple single-task models, fine-tuned from the same pre-trained model, into a multitask model. This process typically involves computing a weighted average of the model parameters without any additional training. Existing model-merging methods focus on enhancing average task accuracy. However, interference and conflicts between the objectives of different tasks can lead to trade-offs during model merging. In real-world applications, a set of solutions with various trade-offs can be more informative, helping practitioners make decisions based on diverse preferences. In this paper, we introduce a novel low-compute algorithm, Model Merging with Amortized Pareto Front (MAP). MAP identifies a Pareto set of scaling coefficients for merging multiple models to reflect the trade-offs. The core component of MAP is approximating the evaluation metrics of the various tasks with a quadratic surrogate model fitted on a pre-selected set of scaling coefficients, enabling amortized inference. Experimental results on vision and natural language processing tasks show that MAP can accurately identify the Pareto front. To further reduce the computational cost of MAP, we propose (1) a Bayesian adaptive sampling algorithm and (2) a nested merging scheme with multiple stages.
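The surrogate-based pipeline described above can be sketched end to end on a toy problem. This is a minimal illustration, not the authors' implementation: the two quadratic `task_metrics` objectives stand in for the real (expensive) evaluation of merged models, and the feature map, grid resolution, and coefficient ranges are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for evaluating a merged model theta_pre + sum_i c_i * tau_i:
# two conflicting synthetic objectives over 2 scaling coefficients c.
# (Hypothetical; the real metrics come from evaluating merged networks.)
def task_metrics(c):
    m1 = 1.0 - (c[0] - 0.8) ** 2 - 0.3 * (c[1] - 0.2) ** 2
    m2 = 1.0 - 0.3 * (c[0] - 0.2) ** 2 - (c[1] - 0.8) ** 2
    return np.array([m1, m2])

def quad_features(C):
    """Quadratic feature map [1, c, c_i * c_j] for a batch of coefficient vectors."""
    n, d = C.shape
    cross = np.stack([C[:, i] * C[:, j] for i in range(d) for j in range(i, d)], axis=1)
    return np.hstack([np.ones((n, 1)), C, cross])

# 1) Pre-select scaling coefficients and run the expensive evaluation once per point.
C_train = rng.uniform(0.0, 1.0, size=(40, 2))
Y_train = np.array([task_metrics(c) for c in C_train])

# 2) Fit one quadratic surrogate per task metric by least squares.
Phi = quad_features(C_train)
W, *_ = np.linalg.lstsq(Phi, Y_train, rcond=None)

# 3) Amortized inference: predict metrics on a dense grid with no new evaluations.
grid = np.array([[a, b] for a in np.linspace(0, 1, 50) for b in np.linspace(0, 1, 50)])
preds = quad_features(grid) @ W

def pareto_mask(Y):
    """Boolean mask of non-dominated rows when every objective is maximized."""
    mask = np.ones(len(Y), dtype=bool)
    for i, y in enumerate(Y):
        if mask[i]:
            # Drop points strictly dominated by y.
            dominated = np.all(y >= Y, axis=1) & np.any(y > Y, axis=1)
            mask[dominated] = False
    return mask

pareto_coeffs = grid[pareto_mask(preds)]  # scaling coefficients on the approximate front
```

Because the toy metrics are themselves quadratic, the surrogate fit here is essentially exact; with real task metrics the quadratic model is only an approximation, which is what makes a cheap, amortized sweep over the coefficient grid possible.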