Merging Large Language Models (LLMs) is a cost-effective technique for combining multiple expert LLMs into a single versatile model while retaining the expertise of the original models. However, current approaches often overlook the importance of safety alignment during merging, leading to highly misaligned merged models. This work investigates the effects of model merging on alignment. We evaluate several popular model merging techniques, demonstrating that existing methods not only transfer domain expertise but also propagate misalignment. We propose a simple two-step approach to address this problem: (i) generating synthetic safety and domain-specific data, and (ii) incorporating these generated data into the optimization process of existing data-aware model merging techniques. This allows us to treat alignment as a skill that can be maximized in the resulting merged LLM. Our experiments illustrate the effectiveness of integrating alignment-related data during merging, resulting in models that excel in both domain expertise and alignment.
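As a rough illustration of the idea behind step (ii), the sketch below shows a data-aware merge in miniature: two toy "expert" models are interpolated, and the merging coefficient is chosen by evaluating candidate merges on both a domain batch and a safety batch. This is a minimal sketch with assumed toy models and synthetic data, not the paper's actual method or models; the point is only that the safety data enters the merging objective alongside the domain data.

```python
import torch

# Toy "experts": two small linear heads standing in for domain-expert LLMs.
def make_expert(seed: int) -> torch.nn.Linear:
    torch.manual_seed(seed)
    return torch.nn.Linear(8, 2)

expert_a, expert_b = make_expert(0), make_expert(1)

# Synthetic held-out batches standing in for (i) domain data and safety data.
torch.manual_seed(42)
domain_x, domain_y = torch.randn(64, 8), torch.randint(0, 2, (64,))
safety_x, safety_y = torch.randn(64, 8), torch.randint(0, 2, (64,))

def merge(alpha: float) -> torch.nn.Linear:
    """Linearly interpolate the experts' parameters with coefficient alpha."""
    merged = torch.nn.Linear(8, 2)
    with torch.no_grad():
        for p_m, p_a, p_b in zip(merged.parameters(),
                                 expert_a.parameters(),
                                 expert_b.parameters()):
            p_m.copy_(alpha * p_a + (1.0 - alpha) * p_b)
    return merged

def loss_on(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    return torch.nn.functional.cross_entropy(model(x), y).item()

# Data-aware selection: the merging coefficient is scored on the sum of the
# domain loss and the safety loss, so alignment is treated as a skill to keep.
best_alpha = min(
    (a / 10 for a in range(11)),
    key=lambda a: loss_on(merge(a), domain_x, domain_y)
                + loss_on(merge(a), safety_x, safety_y),
)
print(f"selected merging coefficient alpha={best_alpha:.1f}")
```

In practice, data-aware merging methods optimize far richer parameterizations (e.g., per-layer or per-task coefficients) rather than a single scalar, but the same principle applies: whatever data defines the merging objective determines which skills, including safety, the merged model retains.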