Graph neural networks are becoming increasingly popular in machine learning due to their unique ability to process graph-structured data. They have also been applied in safety-critical environments, where perturbations inherently occur. Because neural networks are prone to adversarial attacks, such perturbations make formal verification necessary before deployment in safety-critical settings. While there is existing research on the formal verification of neural networks, no prior work verifies the robustness of generic graph convolutional network architectures with uncertainty in both the node features and the graph structure over multiple message-passing steps. This work addresses this research gap by explicitly preserving the non-convex dependencies of all elements in the underlying computations through reachability analysis with (matrix) polynomial zonotopes. We demonstrate our approach on three popular benchmark datasets.
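To make the core idea concrete, the following is a minimal sketch, in Python with NumPy, of a polynomial zonotope (center, dependent generators, and exponent matrix) and its exact propagation through a linear map. All class and method names are illustrative, independent generators are omitted, and the nonlinear-activation enclosure needed for a full GNN verifier is not shown; the point is only that linear operations (such as one message-passing step `A X W`) preserve the dependencies between the uncertain factors.

```python
import numpy as np

# A (simplified) polynomial zonotope:
#   PZ = { c + sum_k G[:, k] * prod_i alpha_i^{E[i, k]} : alpha in [-1, 1]^p }
# with center c (n,), dependent generator matrix G (n, h),
# and integer exponent matrix E (p, h).
class PolyZonotope:
    def __init__(self, c, G, E):
        self.c = np.asarray(c, dtype=float)
        self.G = np.asarray(G, dtype=float)
        self.E = np.asarray(E, dtype=int)

    def linear_map(self, W):
        # Linear maps are exact: W (c + G m(alpha)) = W c + (W G) m(alpha),
        # so the non-convex dependencies encoded in E are preserved.
        W = np.asarray(W, dtype=float)
        return PolyZonotope(W @ self.c, W @ self.G, self.E)

    def sample(self, alpha):
        # Evaluate one member of the set for concrete factors alpha in [-1, 1]^p.
        alpha = np.asarray(alpha, dtype=float)
        monomials = np.prod(alpha[:, None] ** self.E, axis=0)  # shape (h,)
        return self.c + self.G @ monomials
```

Because the map is exact, sampling a point and then applying `W` gives the same result as mapping the whole set first and sampling afterwards; this dependency preservation is what distinguishes polynomial zonotopes from interval-based enclosures over multiple message-passing steps.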