Learning from a stream of tasks usually pits plasticity against stability: acquiring new knowledge often causes catastrophic forgetting of past information. Most methods address this by summing competing loss terms, creating gradient conflicts that are managed with complex and often inefficient strategies such as external memory replay or parameter regularization. We propose a reformulation of the continual learning objective using Douglas-Rachford Splitting (DRS). This reframes the learning process not as a direct trade-off, but as a negotiation between two decoupled objectives: one promoting plasticity for new tasks and the other enforcing stability of old knowledge. By iteratively finding a consensus through their proximal operators, DRS provides a more principled and stable learning dynamic. Our approach achieves an efficient balance between stability and plasticity without the need for auxiliary modules or complex add-ons, providing a simpler yet more powerful paradigm for continual learning systems.
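The consensus dynamic described above follows the standard Douglas-Rachford iteration: alternate the proximal operators of the two decoupled objectives and update a driver variable until they agree. The sketch below is purely illustrative and not the paper's implementation; it uses toy quadratic objectives whose proximal operators have closed forms, with hypothetical names (`drs`, `prox_f`, `prox_g`) standing in for the plasticity and stability terms.

```python
def drs(prox_f, prox_g, z0, iters=100):
    """Douglas-Rachford splitting for min_x f(x) + g(x),
    given the proximal operators of f and g (step size folded in)."""
    z = z0
    x = z0
    for _ in range(iters):
        x = prox_f(z)            # "plasticity" step: fit the new-task objective
        y = prox_g(2 * x - z)    # "stability" step: reflect, then protect old knowledge
        z = z + y - x            # consensus update on the driver variable
    return x

# Toy stand-ins: f(x) = 0.5*(x - a)^2 and g(x) = 0.5*(x - b)^2,
# whose prox with step gamma=1 is v -> (v + c) / 2. The minimizer
# of f + g is the midpoint (a + b) / 2.
a, b = 0.0, 4.0
prox_f = lambda v: (v + a) / 2.0
prox_g = lambda v: (v + b) / 2.0

x_star = drs(prox_f, prox_g, z0=10.0)
# x_star converges toward (a + b) / 2 = 2.0
```

Note that each objective is touched only through its own proximal step, so the two loss terms are never summed into a single gradient; this is the decoupling the abstract contrasts with additive-loss methods.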