Formal verification of neural networks is essential before their deployment in safety-critical applications. However, existing methods for formally verifying neural networks do not yet scale to practical problems involving a large number of neurons. We address this challenge by introducing a fully automatic and sound reduction of neural networks using reachability analysis. The soundness ensures that the verification of the reduced network entails the verification of the original network. To the best of our knowledge, we present the first sound reduction approach that is applicable to neural networks with any type of element-wise activation function, such as ReLU, sigmoid, and tanh. The network reduction is computed on the fly while simultaneously verifying the original network and its specifications. All parameters are automatically tuned to minimize the network size without compromising verifiability. We further show the applicability of our approach to convolutional neural networks by explicitly exploiting similar neighboring pixels. Our evaluation shows that our approach can reduce the number of neurons to a fraction of the original count with only a minor outer-approximation error, and thus reduces the verification time to a similar degree.
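To illustrate the core idea behind a sound, reachability-based reduction, the sketch below propagates interval bounds through an affine+ReLU layer and then greedily merges neurons whose reachable intervals are within a tolerance. The merged interval is the union of the originals, so every behavior of the original neurons is contained in the reduced one (the soundness property). This is a minimal illustration under simplifying assumptions, not the paper's actual algorithm; the function names, the greedy bucketing, and the tolerance `tol` are all hypothetical.

```python
# Illustrative sketch only -- not the paper's algorithm.
# Interval (box) reachability through one affine + ReLU layer,
# followed by a greedy merge of neurons with similar reachable sets.
import numpy as np

def interval_affine(l, u, W, b):
    """Sound interval propagation through x -> W @ x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lo = Wp @ l + Wn @ u + b  # positive weights take lower, negative take upper
    hi = Wp @ u + Wn @ l + b
    return lo, hi

def interval_relu(l, u):
    """ReLU is monotone, so applying it to the bounds is exact on boxes."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def merge_similar(l, u, tol=0.1):
    """Greedily bucket neurons whose intervals differ by at most tol.
    Each bucket keeps the interval union, so the reduced network
    over-approximates the original (soundness)."""
    groups = []
    for i in range(len(l)):
        for g in groups:
            if abs(l[i] - g["l"]) <= tol and abs(u[i] - g["u"]) <= tol:
                g["idx"].append(i)
                g["l"] = min(g["l"], l[i])  # union of intervals
                g["u"] = max(g["u"], u[i])
                break
        else:
            groups.append({"idx": [i], "l": float(l[i]), "u": float(u[i])})
    return groups
```

Because the merged intervals contain the original reachable sets, any specification verified on the reduced bounds also holds for the original network; the price is the (here deliberately coarse) outer-approximation introduced by the union.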