In domains with interdependent data, such as graphs, quantifying the epistemic uncertainty of a Graph Neural Network (GNN) is challenging because uncertainty can arise at different structural scales. Existing techniques either neglect this issue or only distinguish between structure-aware and structure-agnostic uncertainty without combining them into a single measure. We propose GEBM, an energy-based model (EBM) that provides high-quality uncertainty estimates by aggregating energy at the different structural levels that naturally arise from graph diffusion. In contrast to logit-based EBMs, we provably induce an integrable density in the data space by regularizing the energy function. We introduce an evidential interpretation of our EBM that significantly improves the predictive robustness of the GNN. Our framework is a simple and effective post hoc method, applicable to any pre-trained GNN, that is sensitive to various distribution shifts. It consistently achieves the best separation of in-distribution and out-of-distribution data on 6 out of 7 anomaly types while having the best average rank over shifts on \emph{all} datasets.
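To make the core idea concrete, the following is a minimal sketch of a logit-based node energy aggregated over several graph-diffusion steps. This is an illustrative assumption, not the paper's exact formulation: the energy function, the use of a row-normalized adjacency matrix, and the uniform averaging over diffusion levels are all simplifications chosen for clarity.

```python
import numpy as np

def logsumexp(a, axis=-1):
    """Numerically stable log-sum-exp along an axis."""
    m = a.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def multi_scale_graph_energy(logits, A, steps=3):
    """Hypothetical sketch: per-node logit-based energy E_i = -logsumexp(logits_i),
    diffused over the graph and averaged across structural levels.

    logits: (n, c) array of per-node class logits from a pre-trained GNN.
    A:      (n, n) row-normalized adjacency matrix (the diffusion operator).
    """
    e = -logsumexp(logits, axis=-1)   # node-level (structure-agnostic) energy
    energies = [e]
    for _ in range(steps):
        e = A @ e                     # one diffusion step: mix neighbor energies
        energies.append(e)            # each step adds a coarser structural level
    return np.mean(energies, axis=0)  # aggregate all levels into one measure
```

Higher aggregated energy would then flag a node as more likely out-of-distribution, with the diffusion steps letting structurally anomalous neighborhoods contribute to a node's score.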