Model merging is an efficient technique for enhancing model capabilities in the machine learning community that requires neither the collection of raw training data nor expensive computation. As model merging becomes increasingly prevalent across various fields, a comprehensive understanding of the available model merging techniques is crucial. However, the literature still lacks a systematic and thorough review of these techniques. This survey provides a comprehensive overview of model merging methods and theories, their applications in various domains and settings, and future research directions. Specifically, we first propose a new taxonomy that exhaustively covers existing model merging methods. Second, we discuss the application of model merging techniques to large language models, multimodal large language models, and more than ten machine learning subfields, including continual learning, multi-task learning, and few-shot learning. Finally, we highlight the remaining challenges of model merging and discuss future research directions. A comprehensive list of papers on model merging is available at https://github.com/EnnengYang/Awesome-Model-Merging-Methods-Theories-Applications.
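For illustration only (this sketch is not drawn from the survey itself), the simplest form of model merging is uniform parameter averaging of models fine-tuned from the same base architecture; the snippet below assumes PyTorch models with identical architectures, and `average_merge` is a hypothetical helper name introduced here for clarity.

```python
import torch

def average_merge(state_dicts, weights=None):
    """Merge models by (weighted) averaging their parameters.

    state_dicts: list of state_dicts from models with identical architectures.
    weights: optional per-model coefficients; defaults to a uniform average.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        # Weighted sum of the corresponding parameter tensor from each model.
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged

# Hypothetical usage: merge two fine-tuned checkpoints of the same base model.
# model_a, model_b = ...  # fine-tuned models with identical architectures
# merged_sd = average_merge([model_a.state_dict(), model_b.state_dict()])
# model_a.load_state_dict(merged_sd)  # reuse one instance to hold the merged weights
```

Note that uniform averaging is only the most basic baseline; the survey's taxonomy covers more sophisticated merging strategies beyond this sketch.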