This work explores the intersection of continual learning (CL) and differential privacy (DP). Crucially, continual learning models must retain knowledge across tasks, but this conflicts with the differential privacy requirement that the model must not memorise individual samples. We propose using pre-trained models to address the trade-off between privacy and performance in a continual learning setting. More specifically, we present the assumptions necessary to enable privacy preservation and propose combining pre-trained models with parameter-free classifiers and parameter-efficient adapters that are learned under differential privacy. Our experiments demonstrate the effectiveness of this approach and provide insights into balancing the competing demands of continual learning and privacy.
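
For concreteness, the sketch below shows one way the components named above could fit together: a frozen pre-trained backbone, a parameter-efficient adapter optimised with DP-SGD, and a parameter-free nearest-class-mean classifier whose per-class statistics are updated as tasks arrive. The use of Opacus, the bottleneck adapter shape, the temporary linear head, and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch, assuming: a frozen pre-trained backbone that maps inputs
# to D-dimensional features, a bottleneck adapter trained with DP-SGD via
# Opacus, and a nearest-class-mean (NCM) classifier as the parameter-free
# component. Names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from opacus import PrivacyEngine

class Adapter(nn.Module):
    """Parameter-efficient bottleneck adapter with a residual connection."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

def train_task_dp(backbone, adapter, head, loader,
                  noise_multiplier=1.0, max_grad_norm=1.0, epochs=1, lr=0.05):
    """Train the adapter (plus a temporary linear head) under DP-SGD.

    Only the adapter and head receive gradients; the pre-trained backbone is
    frozen, so the DP noise budget is spent on few parameters."""
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad_(False)
    model = nn.Sequential(adapter, head)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    # Opacus clips per-sample gradients and adds calibrated Gaussian noise.
    model, optimizer, loader = PrivacyEngine().make_private(
        module=model, optimizer=optimizer, data_loader=loader,
        noise_multiplier=noise_multiplier, max_grad_norm=max_grad_norm)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():            # frozen feature extraction
                feats = backbone(x)
            optimizer.zero_grad()
            criterion(model(feats), y).backward()
            optimizer.step()
    return adapter

@torch.no_grad()
def update_class_means(backbone, adapter, loader, sums, counts, sigma=0.0):
    """Parameter-free classifier state: running per-class feature means.

    In a fully private pipeline these statistics must also be released via a
    DP mechanism, e.g. Gaussian noise (sigma > 0); the sensitivity analysis
    that calibrates sigma is omitted from this sketch."""
    for x, y in loader:
        feats = adapter(backbone(x))
        for c in y.unique().tolist():
            sums[c] = sums.get(c, torch.zeros(feats.size(1))) + feats[y == c].sum(0)
            counts[c] = counts.get(c, 0) + int((y == c).sum())
    return {c: sums[c] / counts[c] + sigma * torch.randn_like(sums[c])
            for c in sums}

@torch.no_grad()
def ncm_predict(backbone, adapter, x, class_means):
    """Assign each input to its nearest class mean in adapted feature space."""
    feats = adapter(backbone(x))
    classes = sorted(class_means)
    protos = torch.stack([class_means[c] for c in classes])
    idx = torch.cdist(feats, protos).argmin(1).tolist()
    return torch.tensor([classes[i] for i in idx])
```

In this sketch, freezing the backbone is what makes the privacy/performance trade-off tractable: DP-SGD's clipping and noise apply only to the small number of adapter parameters, while the parameter-free classifier avoids gradient-based (and hence noisy) training entirely, at the cost of requiring its class statistics to be privatised separately.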