Model merging has become one of the key technologies for enhancing the capabilities and efficiency of Large Language Models (LLMs). However, our understanding of the performance gains to expect from merging any two given models, and of the principles behind them, remains limited. In this work, we introduce model kinship, the degree of similarity or relatedness between LLMs, analogous to kinship in biological evolution. Through comprehensive empirical analysis, we find that model kinship correlates with the performance gains obtained after model merging, which can guide the selection of candidate models. Inspired by this, we propose a new model merging strategy, Top-k Greedy Merging with Model Kinship, which yields better performance on benchmark datasets. Specifically, we find that using model kinship as a criterion enables continuous model merging: it alleviates degradation (local optima) during model evolution and serves as a guide for escaping these traps. Code is available at https://github.com/zjunlp/ModelKinship.
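The abstract does not specify how model kinship is computed; one plausible instantiation, sketched below for illustration, measures it as the cosine similarity between two fine-tuned models' delta-parameter vectors relative to a shared base model. The function name and all variables are hypothetical, not the paper's definition.

```python
import numpy as np

def model_kinship(theta_a: np.ndarray, theta_b: np.ndarray,
                  theta_base: np.ndarray) -> float:
    """Illustrative kinship score in [-1, 1]: cosine similarity of the
    delta parameters of two models fine-tuned from the same base."""
    delta_a = theta_a - theta_base   # task-specific update of model A
    delta_b = theta_b - theta_base   # task-specific update of model B
    return float(np.dot(delta_a, delta_b) /
                 (np.linalg.norm(delta_a) * np.linalg.norm(delta_b)))
```

Under this reading, a pair with high kinship would be expected to merge with smaller (but safer) gains, while low-kinship pairs offer more complementary updates; the paper's empirical analysis is what makes this relationship precise.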