The rapid proliferation of pretrained models and open repositories has made model merging a convenient yet risky practice, allowing free-riders to combine fine-tuned models into a new multi-capability model without authorization. Such unauthorized model merging not only violates intellectual property rights but also undermines model ownership and accountability. To address this issue, we present MergeGuard, a proactive dual-stage weight protection framework that disrupts merging compatibility while maintaining task fidelity. In the first stage, we redistribute task-relevant information across layers via L2-regularized optimization, ensuring that important gradients are evenly dispersed. In the second stage, we inject structured perturbations to misalign task subspaces, breaking curvature compatibility in the loss landscape. Together, these stages reshape the model's parameter geometry such that merged models collapse into destructive interference while the protected model remains fully functional. Extensive experiments on both vision (ViT-L-14) and language (Llama2, Gemma2, Mistral) models demonstrate that MergeGuard reduces merged model accuracy by up to 90% with less than 1.5% performance loss on the protected model.
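The two stages described above can be illustrated with a deliberately simplified toy sketch. This is not the paper's algorithm: the layer shape, the scalar L2 shrinkage standing in for the regularized redistribution, and the random orthogonal rotation standing in for the structured perturbation are all illustrative assumptions; the real method preserves task fidelity, which this toy does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "task vector": fine-tuned weights minus pretrained weights for one layer.
pretrained = rng.normal(size=(8, 8))
task_vector = rng.normal(scale=0.1, size=(8, 8))

# Stage 1 (toy stand-in): an L2-style shrinkage that dampens dominant
# directions, crudely mimicking even redistribution of task information.
lam = 0.5
redistributed = task_vector / (1.0 + lam)

# Stage 2 (toy stand-in): rotate the task vector with a random orthogonal
# matrix so its subspace no longer aligns with other models' task subspaces.
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
protected_delta = q @ redistributed

# A naive averaging merge with an unprotected task vector now mixes
# misaligned subspaces, illustrating the destructive interference.
other_task_vector = rng.normal(scale=0.1, size=(8, 8))
merged = pretrained + 0.5 * (protected_delta + other_task_vector)

# Cosine similarity between the protected and original task vectors shows
# how far the perturbation has moved the task subspace.
cos = float(np.sum(protected_delta * task_vector) / (
    np.linalg.norm(protected_delta) * np.linalg.norm(task_vector)
))
print(f"alignment of protected vs. original task vector: {cos:.3f}")
```

Because the rotation is orthogonal, the perturbed task vector keeps its norm while its direction, and hence its task subspace, changes, which is the geometric intuition behind breaking merging compatibility.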