We propose a method to enhance the stability of a neural ordinary differential equation (neural ODE) by reducing the maximum error growth following a perturbation of the initial value. Since this stability is governed by the logarithmic norm of the Jacobian matrix associated with the neural ODE, we control the logarithmic norm by perturbing the weight matrices of the neural ODE by the smallest possible amount, measured in the Frobenius norm. We do so by solving an eigenvalue optimisation problem, for which we propose a nested two-level algorithm: for a given perturbation size of the weight matrix, the inner level computes an optimal perturbation of that size, while, at the outer level, we tune the perturbation amplitude until the desired uniform stability bound is reached. We embed the proposed algorithm in the training of the neural ODE to improve its robustness to perturbations of the initial value, such as those arising from adversarial attacks. Numerical experiments on classical image datasets show that an image classifier whose architecture includes a neural ODE is more stable when trained according to our strategy than when trained in the classical way, and is therefore more robust and less vulnerable to adversarial attacks.
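To make the stability notion concrete, the following is the standard logarithmic-norm estimate that the abstract alludes to (textbook material on one-sided Lipschitz bounds, stated here in our own notation, not quoted from the paper). For a linear (or linearised) ODE $\dot{x}(t) = J(t)\,x(t)$, any two solutions $x$ and $\tilde{x}$ satisfy

\[
\|x(t) - \tilde{x}(t)\| \le \exp\!\Big( \int_0^t \mu\big(J(s)\big)\,\mathrm{d}s \Big)\, \|x(0) - \tilde{x}(0)\|,
\]

where $\mu$ denotes the logarithmic norm; in the Euclidean norm, $\mu_2(J) = \lambda_{\max}\big( (J + J^{\top})/2 \big)$. Keeping $\mu_2$ of the Jacobian below a prescribed level therefore bounds the worst-case error growth uniformly in time, which is the kind of uniform stability bound the nested algorithm targets.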
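The abstract does not fix implementation details, so the following is a hypothetical numpy sketch of the nested two-level idea on a single weight matrix. All names (log_norm_2, inner_level, stabilise) are illustrative, and two simplifications are assumed: we control $\mu_2$ of the weight matrix itself rather than of the full Jacobian, and we use projected gradient descent plus bisection as stand-ins for the paper's eigenvalue optimisation and amplitude tuning. It is a toy version, not the authors' algorithm.

import numpy as np

def log_norm_2(W):
    # Logarithmic 2-norm mu_2(W): largest eigenvalue of the symmetric part.
    return np.linalg.eigvalsh((W + W.T) / 2.0)[-1]

def inner_level(W, eps, steps=500, lr=0.05):
    # Inner level: for a fixed amplitude eps, search for a direction E with
    # ||E||_F = 1 that locally minimises mu_2(W + eps * E).  The gradient of
    # lambda_max of the symmetric part with respect to E is eps * v v^T,
    # with v the leading eigenvector; each step projects back onto the sphere.
    n = W.shape[0]
    E = -np.eye(n) / np.sqrt(n)          # heuristic start: damp the diagonal
    for _ in range(steps):
        S = W + eps * E
        _, V = np.linalg.eigh((S + S.T) / 2.0)
        v = V[:, -1]                     # leading eigenvector
        E -= lr * eps * np.outer(v, v)   # gradient step on lambda_max
        E /= np.linalg.norm(E)           # projection: ||E||_F = 1
    return E

def stabilise(W, target=0.0, tol=1e-3):
    # Outer level: bisect on the amplitude eps until the perturbed matrix
    # satisfies mu_2(W + eps * E(eps)) <= target.
    if log_norm_2(W) <= target:
        return 0.0, W
    lo, hi = 0.0, np.linalg.norm(W)      # ||W||_F: crude bracket, demo only
    while hi - lo > tol:
        eps = 0.5 * (lo + hi)
        E = inner_level(W, eps)
        if log_norm_2(W + eps * E) <= target:
            hi = eps
        else:
            lo = eps
    E = inner_level(W, hi)
    return hi, W + hi * E

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = 0.5 * rng.standard_normal((8, 8))
    eps, W_new = stabilise(W, target=0.0)
    print(f"mu_2 before: {log_norm_2(W):+.4f}")
    print(f"mu_2 after : {log_norm_2(W_new):+.4f}  (||Delta W||_F = {eps:.4f})")

In a training loop, one would presumably apply such a correction to the weight matrices after each optimiser step (or as a regularisation target), so that the classifier is trained with a Jacobian whose logarithmic norm stays below the desired bound; the paper's embedded-training strategy is the authoritative reference for how this is actually done.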