Graph coarsening aims to reduce the size of a large graph while preserving some of its key properties, and has been used in many applications to reduce computational load and memory footprint. For instance, in graph machine learning, training Graph Neural Networks (GNNs) on coarsened graphs leads to drastic savings in time and memory. However, GNNs rely on the Message-Passing (MP) paradigm, and classical spectral preservation guarantees for graph coarsening do not directly translate into theoretical guarantees when performing naive message-passing on the coarsened graph. In this work, we propose a new message-passing operation specific to coarsened graphs, which comes with theoretical guarantees on the preservation of the propagated signal. Interestingly, and in sharp departure from previous proposals, this operation on coarsened graphs is oriented, even when the original graph is undirected. We conduct node classification tasks on synthetic and real data and observe improved results compared to performing naive message-passing on the coarsened graph.
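To build intuition for how an operation on a coarsened graph can be oriented even when the original graph is undirected, here is a minimal NumPy sketch. It is an illustration under assumed conventions, not the paper's exact operator: the membership matrix `P`, the averaging coarsening map `Q`, and the propagation matrix `S` built from `Q` and its pseudoinverse are all illustrative choices. With clusters of unequal size, `S` is non-symmetric, while the naive coarsened adjacency stays symmetric.

```python
import numpy as np

# Undirected path graph on 4 nodes: 0-1-2-3 (symmetric adjacency).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Coarsen into two clusters of unequal size: {0, 1, 2} and {3}.
# P is the 0/1 membership matrix; Q averages node signals within a cluster.
# (These names and the uniform-averaging choice are assumptions for illustration.)
P = np.array([[1, 1, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = P / P.sum(axis=1, keepdims=True)

# Naive coarsened adjacency: symmetric by construction, so naive
# message-passing on the coarsened graph remains undirected.
A_naive = P @ A @ P.T
assert np.allclose(A_naive, A_naive.T)

# A propagation matrix built from the coarsening map Q and the lifting map
# pinv(Q): propagate in the original graph, seen through the coarsening.
S = Q @ A @ np.linalg.pinv(Q)
print(S)                     # approximately [[4/3, 1/3], [1, 0]]
print(np.allclose(S, S.T))   # False: the coarsened operation is oriented
```

The asymmetry comes from the unequal cluster sizes: averaging down and lifting back up are not each other's transpose, so information flows with different weights in the two directions between clusters.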