In the real world, a learning-enabled system usually undergoes multiple cycles of model development to enhance its ability to handle difficult or emerging tasks. This continual development process raises a significant issue: development aimed at acquiring new capabilities or improving existing ones may inadvertently cause the new model to lose capabilities of the old model, a phenomenon known as catastrophic forgetting. Existing continual learning studies focus on mitigating catastrophic forgetting by trading off performance between previous and new tasks to ensure good average performance. However, they are inadequate for many applications, especially in safety-critical domains: failure to strictly preserve the good performance of the old model not only introduces safety risks and uncertainties but also imposes substantial costs for re-improving and re-validating existing properties. To address this issue, we introduce model developmental safety as a guarantee of a learning system: during model development, the new model must strictly preserve the existing protected capabilities of the old model while improving its performance on target tasks. To ensure model developmental safety, we present a retention-centric framework that formulates developmental safety as data-dependent constraints. Under this framework, we study how to develop a pretrained vision-language model, specifically the CLIP model, to acquire new image classification capabilities or improve existing ones. We propose an efficient constrained optimization algorithm with a theoretical guarantee and use its insights to finetune a CLIP model with task-dependent heads to promote model developmental safety. Our experiments on improving vision perception capabilities on autonomous driving and scene recognition datasets demonstrate the efficacy of the proposed approach.
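The retention-centric, constraint-based formulation described above can be sketched as follows. The notation here is illustrative rather than taken from the paper: $w$ denotes the new model's parameters, $w_{\mathrm{old}}$ the old model's parameters, $\mathcal{L}_{\mathrm{tgt}}$ the loss on the target tasks to be improved, and $\mathcal{L}_k$ the loss on the $k$-th protected capability, each estimated from task-specific data:

```latex
\min_{w} \; \mathcal{L}_{\mathrm{tgt}}(w)
\quad \text{s.t.} \quad
\mathcal{L}_k(w) \le \mathcal{L}_k(w_{\mathrm{old}}),
\qquad k = 1, \dots, K.
```

Each constraint is data-dependent because $\mathcal{L}_k$ is evaluated on samples drawn from the corresponding protected task; unlike regularization-based continual learning, which penalizes deviation on average, the constraints require that no protected capability degrades below its old-model level.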