Task arithmetic has emerged as a promising approach for editing models by representing task-specific knowledge as composable task vectors. However, existing methods rely on network linearization to derive task vectors, leading to computational bottlenecks during training and inference. Moreover, linearization alone does not ensure weight disentanglement, the key property that enables conflict-free composition of task vectors. To address this, we propose TaLoS, which builds sparse task vectors with minimal interference, without requiring explicit linearization or sharing information across tasks. We find that pre-trained models contain a subset of parameters with consistently low gradient sensitivity across tasks, and that sparsely updating only these parameters promotes weight disentanglement during fine-tuning. Our experiments show that TaLoS improves training and inference efficiency while outperforming current methods in task addition and negation. By enabling modular parameter editing, our approach fosters the practical deployment of adaptable foundation models in real-world applications.
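The core mechanism can be illustrated with a minimal NumPy sketch: rank parameters by a squared-gradient sensitivity proxy aggregated across tasks, keep only the least sensitive fraction, and restrict the task vector to those coordinates. The function names, the `keep_ratio` parameter, and the max-over-tasks aggregation are illustrative assumptions, not the paper's actual API or exact criterion.

```python
import numpy as np

def low_sensitivity_mask(grads_per_task, keep_ratio=0.1):
    """Mask marking parameters with consistently low gradient sensitivity.

    grads_per_task: list of gradient arrays (one per task), all the same
    shape as the parameter tensor. Sensitivity proxy (an assumption here):
    per-parameter squared gradient, aggregated by the max across tasks so
    that a parameter counts as 'safe' only if every task barely uses it.
    """
    sens = np.max([g ** 2 for g in grads_per_task], axis=0)
    k = max(1, int(keep_ratio * sens.size))
    # Threshold at the k-th smallest sensitivity value
    thresh = np.partition(sens.ravel(), k - 1)[k - 1]
    return sens <= thresh  # True where the parameter may be updated

def sparse_task_vector(update, mask):
    """Zero out the update everywhere except the low-sensitivity coordinates."""
    return np.where(mask, update, 0.0)

# Toy example: two tasks, four parameters
g_task_a = np.array([0.1, 2.0, 0.3, 5.0])
g_task_b = np.array([0.2, 1.0, 0.1, 4.0])
mask = low_sensitivity_mask([g_task_a, g_task_b], keep_ratio=0.5)
tv = sparse_task_vector(np.ones(4), mask)  # dense update -> sparse task vector
```

Because each task's vector only touches coordinates that no task is sensitive to, two such vectors perturb the model where the pre-trained function is locally flat, which is the intuition behind conflict-free addition and negation.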