Model merging integrates the parameters of multiple models into a unified model, combining their diverse capabilities. Existing model merging methods are often constrained by fixed parameter merging ratios. In this study, we propose Mixup Model Merge (M$^3$), an innovative approach inspired by the Mixup data augmentation technique. This method merges the parameters of two large language models (LLMs) by randomly sampling a linear interpolation ratio, allowing for a more flexible and comprehensive exploration of the parameter space. Extensive experiments demonstrate the superiority of our proposed M$^3$ method in merging fine-tuned LLMs: (1) it significantly improves performance across multiple tasks, (2) it enhances LLMs' out-of-distribution (OOD) robustness and adversarial robustness, (3) it achieves superior results when combined with sparsification techniques such as DARE, and (4) it is simple and efficient, requiring no additional computational resources. In conclusion, M$^3$ is a simple yet effective model merging method that significantly enhances the performance of the merged model by randomly generating the contribution ratios of two fine-tuned LLMs. The code is available at https://github.com/MLGroupJLU/MixupModelMerge.
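The core operation described above can be sketched as a per-parameter linear interpolation with a randomly sampled ratio. This is a minimal illustration, not the paper's implementation: the Beta-distributed sampling and the `alpha` hyperparameter are assumptions borrowed from the original Mixup technique, and the function and parameter names are hypothetical.

```python
import random

def mixup_merge(params_a, params_b, alpha=2.0, seed=None):
    """Merge two models' parameter dictionaries via a single randomly
    sampled interpolation ratio, Mixup-style.

    Assumptions (not stated in the abstract): the ratio is drawn from
    Beta(alpha, alpha), and both models share identical parameter names.
    """
    rng = random.Random(seed)
    # Randomly sample the contribution ratio of model A; model B gets 1 - lam.
    lam = rng.betavariate(alpha, alpha)
    merged = {
        name: lam * params_a[name] + (1.0 - lam) * params_b[name]
        for name in params_a
    }
    return merged, lam
```

In practice the dictionary values would be weight tensors (e.g. a model's `state_dict`) rather than scalars; the interpolation is elementwise either way, so the scalar version above captures the idea.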