Despite advancements in Graph Neural Networks (GNNs), adaptive attacks continue to challenge their robustness. Certified robustness based on randomized smoothing has emerged as a promising solution, offering provable guarantees that a model's predictions remain stable under adversarial perturbations within a specified range. However, existing methods face a critical trade-off between accuracy and robustness, as achieving stronger robustness requires introducing greater noise into the input graph. This excessive randomization degrades data quality and disrupts prediction consistency, limiting the practical deployment of certifiably robust GNNs in real-world scenarios where both accuracy and robustness are essential. To address this challenge, we propose \textbf{AuditVotes}, the first framework to achieve both high clean accuracy and certifiably robust accuracy for GNNs. It integrates randomized smoothing with two key components, \underline{au}gmentation and con\underline{dit}ional smoothing, aiming to improve data quality and prediction consistency. The augmentation, acting as a pre-processing step, de-noises the randomized graph, significantly improving data quality and clean accuracy. The conditional smoothing, serving as a post-processing step, employs a filtering function to selectively count votes, thereby filtering out low-quality predictions and improving voting consistency. Extensive experimental results demonstrate that AuditVotes significantly enhances clean accuracy, certified robustness, and empirical robustness while maintaining high computational efficiency. Notably, compared to baseline randomized smoothing, AuditVotes improves clean accuracy by $437.1\%$ and certified accuracy by $409.3\%$ when the attacker can arbitrarily insert $20$ edges on the Cora-ML dataset, representing a substantial step toward deploying certifiably robust GNNs in real-world applications.
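To make the augmentation-then-filtering pipeline described above concrete, the following is a minimal sketch, not the authors' implementation: the input graph is randomized many times, each randomized copy is passed through a hypothetical de-noising step, the base classifier votes on every copy, and a filtering function discards low-confidence votes before the majority is taken. The function names (\texttt{randomize}, \texttt{denoise}, \texttt{base\_classifier}) and the confidence-threshold filter are illustrative assumptions only.

\begin{verbatim}
# Minimal sketch of randomized smoothing with augmentation (pre-processing)
# and conditional vote filtering (post-processing). All components are
# hypothetical placeholders, not the AuditVotes implementation.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 3

def randomize(adj, flip_prob=0.05):
    # Randomly flip edges of the adjacency matrix (illustrative noise model).
    mask = rng.random(adj.shape) < flip_prob
    noisy = np.logical_xor(adj.astype(bool), mask).astype(int)
    upper = np.triu(noisy, 1)
    return upper + upper.T  # keep the graph symmetric, no self-loops

def denoise(adj):
    # Placeholder for the augmentation step that cleans the randomized
    # graph (e.g. an edge predictor); identity here for illustration.
    return adj

def base_classifier(adj, node):
    # Placeholder base GNN: returns a class-probability vector for `node`.
    logits = adj[node].sum() * np.ones(NUM_CLASSES) + rng.normal(size=NUM_CLASSES)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def smoothed_predict(adj, node, n_samples=100, conf_threshold=0.6):
    # Conditional smoothing: count only votes whose confidence passes the filter.
    votes = np.zeros(NUM_CLASSES, dtype=int)
    for _ in range(n_samples):
        probs = base_classifier(denoise(randomize(adj)), node)
        if probs.max() >= conf_threshold:  # drop low-quality predictions
            votes[probs.argmax()] += 1
    return votes.argmax(), votes

adj = (rng.random((10, 10)) < 0.3).astype(int)
adj = np.triu(adj, 1) + np.triu(adj, 1).T
print(smoothed_predict(adj, node=0))
\end{verbatim}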