On the one hand, there has been considerable progress on neural network verification in recent years, which makes certifying neural networks a real possibility. On the other hand, neural networks in practice are often re-trained over time to cope with new data distributions or to solve different tasks (a.k.a. continual learning). Once re-trained, the verified correctness of the neural network is likely broken, particularly in the presence of the phenomenon known as catastrophic forgetting. In this work, we propose an approach called certified continual learning, which improves existing continual learning methods by preserving, for as long as possible, the established correctness properties of a verified network. Our approach is evaluated with multiple neural networks and on two different continual learning methods. The results show that our approach is efficient, and that the trained models preserve their certified correctness while often maintaining high utility.