Conformal prediction has become an increasingly popular tool for quantifying the uncertainty of machine learning models, and recent work on graph uncertainty quantification has built on this approach for conformal prediction on graphs. Because these explorations are still nascent, the literature contains conflicting choices of implementations, baselines, and evaluation protocols. In this work, we analyze the design choices made in the literature and discuss the tradeoffs associated with existing methods. Building on their existing implementations, we introduce techniques that scale these methods to large graph datasets without sacrificing performance. Our theoretical and empirical results justify our recommendations for future scholarship in graph conformal prediction.
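As background, the split conformal procedure that graph conformal methods build on can be sketched in a few lines. The example below is a generic illustration, not the method proposed in this work: the synthetic Dirichlet "model probabilities" and the `1 - p` nonconformity score are illustrative assumptions.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(level, 1.0), method="higher")

def prediction_sets(test_probs, qhat):
    """Include every class whose nonconformity score 1 - p is at most qhat."""
    return [np.where(1 - p <= qhat)[0] for p in test_probs]

rng = np.random.default_rng(0)
n_cal, n_test, n_classes = 500, 200, 4

# Hypothetical classifier outputs: softmax-like probabilities, with labels
# drawn from those probabilities so calibration and test data are exchangeable.
cal_probs = rng.dirichlet(np.ones(n_classes) * 2, size=n_cal)
cal_labels = np.array([rng.choice(n_classes, p=p) for p in cal_probs])
cal_scores = 1 - cal_probs[np.arange(n_cal), cal_labels]

alpha = 0.1  # target miscoverage rate
qhat = conformal_quantile(cal_scores, alpha)

test_probs = rng.dirichlet(np.ones(n_classes) * 2, size=n_test)
test_labels = np.array([rng.choice(n_classes, p=p) for p in test_probs])
sets = prediction_sets(test_probs, qhat)

# Under exchangeability, the sets cover the true label with
# probability at least 1 - alpha.
coverage = np.mean([y in s for y, s in zip(test_labels, sets)])
```

In the graph setting, the main complication is that node features are not exchangeable under arbitrary splits, which is one source of the conflicting design choices this work examines.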