The success of large language models has drawn widespread attention to model merging techniques, especially training-free methods that combine model capabilities within the parameter space. However, two challenges remain: (1) treating all parameters uniformly leads to performance degradation; (2) search-based algorithms are often inefficient. In this paper, we present an innovative framework termed Reinforced Model Merging (RMM), which comprises an environment and an agent tailored to merging tasks. These components interact to execute layer-wise merging actions, searching for the optimal merging architecture. Notably, RMM operates without any gradient computation on the original models, making it feasible for edge devices. Furthermore, by utilizing data subsets during evaluation, we address the bottleneck in the reward feedback phase, accelerating RMM by up to 100 times. Extensive experiments demonstrate that RMM achieves state-of-the-art performance across various vision and NLP datasets and effectively overcomes the limitations of existing baseline methods. Our code is available at https://github.com/WuDiHJQ/Reinforced-Model-Merging.
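The layer-wise merging actions mentioned above can be illustrated with a minimal sketch. The three-way action space (take model A's layer, take model B's, or average the two) and the toy model representation are illustrative assumptions for exposition, not the paper's actual action set or implementation:

```python
import numpy as np

# Illustrative sketch (assumed action space, not the authors' implementation):
# a layer-wise merging policy picks, per layer, how to combine two parent models.
# Toy models with 3 layers each, each layer a small weight vector.
model_a = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 3.0)]
model_b = [np.full(4, 5.0), np.full(4, 6.0), np.full(4, 7.0)]

def apply_actions(a_layers, b_layers, actions):
    """Build a merged model from per-layer actions.
    Action 0: keep model A's layer; 1: keep model B's; 2: average the two.
    No gradients are needed -- only arithmetic on existing weights."""
    merged = []
    for la, lb, act in zip(a_layers, b_layers, actions):
        if act == 0:
            merged.append(la.copy())
        elif act == 1:
            merged.append(lb.copy())
        else:
            merged.append((la + lb) / 2.0)
    return merged

# One candidate architecture the agent might propose:
merged = apply_actions(model_a, model_b, actions=[0, 2, 1])
print([layer[0] for layer in merged])  # [1.0, 4.0, 7.0]
```

In the full framework, an RL agent would select such an action sequence and receive a reward from evaluating the merged model on a data subset, which is what makes the search both gradient-free and fast.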