Recent research has increasingly focused on reconciling the reasoning capabilities of System 2 with the efficiency of System 1. While existing training-based and prompt-based approaches face significant challenges in efficiency and stability, model merging has emerged as a promising strategy for integrating the diverse capabilities of different Large Language Models (LLMs) into a unified model. However, conventional model merging methods often assume uniform importance across layers, overlooking the functional heterogeneity inherent in neural components. To address this limitation, we propose \textbf{A}ctivation-Guided \textbf{C}onsensus \textbf{M}erging (\textbf{ACM}), a plug-and-play merging framework that determines layer-specific merging coefficients from the mutual information between the activations of pre-trained and fine-tuned models. ACM effectively preserves task-specific capabilities without requiring gradient computation or additional training. Extensive experiments on Long-to-Short (L2S) and general merging tasks show that ACM consistently outperforms all baseline methods. For instance, on Qwen-7B models, TIES-Merging equipped with ACM achieves a \textbf{55.3\%} reduction in response length while improving reasoning accuracy by \textbf{1.3} points.
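To make the layer-wise idea concrete, the following is a minimal sketch of activation-guided merging. It assumes a simple 2D-histogram mutual-information (MI) estimator and an illustrative mapping from MI to per-layer coefficients; the helper names (\texttt{histogram\_mi}, \texttt{layer\_coefficients}, \texttt{merge}) and the interpolation $\theta_\ell = \theta_\ell^{\text{pre}} + \lambda_\ell\,(\theta_\ell^{\text{ft}} - \theta_\ell^{\text{pre}})$ are assumptions for exposition, not the exact procedure used in ACM.

\begin{verbatim}
# Minimal sketch: layer-wise merging coefficients from activation MI.
# The MI estimator and the MI -> coefficient mapping below are
# illustrative assumptions, not the paper's exact method.
import numpy as np

def histogram_mi(x, y, bins=32):
    """Crude MI estimate between two 1-D activation samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

def layer_coefficients(acts_pre, acts_ft):
    """One coefficient per layer. Lower MI (more divergent
    activations) -> keep more of the fine-tuned weights;
    this inverse mapping is a hypothetical choice."""
    mi = np.array([histogram_mi(a.ravel(), b.ravel())
                   for a, b in zip(acts_pre, acts_ft)])
    return 1.0 - (mi - mi.min()) / (np.ptp(mi) + 1e-8)

def merge(weights_pre, weights_ft, lam):
    """theta_l = theta_pre_l + lambda_l * (theta_ft_l - theta_pre_l)."""
    return [wp + l * (wf - wp)
            for wp, wf, l in zip(weights_pre, weights_ft, lam)]

# Toy usage with random stand-ins for per-layer weights/activations.
rng = np.random.default_rng(0)
L = 4
w_pre = [rng.normal(size=(8, 8)) for _ in range(L)]
w_ft = [w + 0.1 * rng.normal(size=w.shape) for w in w_pre]
a_pre = [rng.normal(size=256) for _ in range(L)]
a_ft = [a + 0.5 * rng.normal(size=a.shape) for a in a_pre]
merged = merge(w_pre, w_ft, layer_coefficients(a_pre, a_ft))
\end{verbatim}

Because the coefficients depend only on forward-pass activations over a small calibration set, this scheme needs no gradients or training, which is what makes it plug-and-play on top of existing merging methods such as TIES-Merging.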