We develop a method for the efficient verification of neural networks against convolutional perturbations such as blurring or sharpening. To define input perturbations we use well-known camera shake, box blur and sharpen kernels. We demonstrate that these kernels can be linearly parameterised in a way that allows the perturbation strength to be varied while preserving desired kernel properties. To facilitate their use in neural network verification, we develop an efficient way of convolving a given input with these parameterised kernels. The result of this convolution can be used to encode the perturbation in a verification setting by prepending a linear layer to the network under analysis. This yields tight bounds and high effectiveness in the resulting verification step. We add further precision by employing input splitting as a branch-and-bound strategy. We demonstrate that we are able to verify robustness on a number of standard benchmarks where the baseline is unable to provide any safety certificates. To the best of our knowledge, this is the first solution for verifying robustness against specific convolutional perturbations such as camera shake.
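The two central ideas of the abstract, a linearly parameterised kernel and the encoding of the convolution as a prepended linear layer, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the interpolation between an identity kernel and a 3x3 box blur, the zero padding, and the function names are all assumptions made for the example.

```python
import numpy as np

def box_blur_kernel(t):
    # Hypothetical linear parameterisation: interpolate between the identity
    # kernel (no perturbation) and a 3x3 box blur. For any t in [0, 1] the
    # entries stay non-negative and sum to 1, so the kernel remains a valid
    # averaging filter while t scales the perturbation strength linearly.
    identity = np.zeros((3, 3))
    identity[1, 1] = 1.0
    box = np.full((3, 3), 1.0 / 9.0)
    return (1.0 - t) * identity + t * box

def conv_as_linear_layer(kernel, h, w):
    # Unroll a 2D convolution (zero padding, stride 1) over an h x w image
    # into a dense matrix W such that conv(x, kernel) == W @ x.flatten().
    # Prepending W to a network encodes the perturbation as a linear layer
    # that a standard verifier can propagate bounds through.
    W = np.zeros((h * w, h * w))
    k = kernel.shape[0] // 2
    for i in range(h):
        for j in range(w):
            for di in range(-k, k + 1):
                for dj in range(-k, k + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        W[i * w + j, ii * w + jj] = kernel[k + di, k + dj]
    return W
```

Because the kernel is linear in t, the unrolled matrix is too: W(t) = (1 - t) I + t W_box, so the perturbed input x + t (W_box - I) x is linear in the strength parameter, and an interval constraint on t translates directly into linear input bounds for the verifier.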