Graph Neural Networks (GNNs) are key tools for graph representation learning, demonstrating strong results across diverse prediction tasks. In this paper, we present Convexified Message-Passing Graph Neural Networks (CGNNs), a novel and general framework that combines the power of message-passing GNNs with the tractability of convex optimization. By mapping their nonlinear filters into a reproducing kernel Hilbert space, CGNNs transform training into a convex optimization problem, which projected gradient methods can solve efficiently and to global optimality. Convexity also makes the statistical properties of CGNNs amenable to rigorous analysis. For two-layer CGNNs, we establish generalization guarantees showing convergence to the performance of an optimal GNN. To scale to deeper architectures, we adopt a principled layer-wise training strategy. Experiments on benchmark datasets show that CGNNs substantially outperform leading GNN models, achieving 10-40% higher accuracy in most cases, underscoring their promise as a powerful and principled method with strong theoretical foundations. In the rare cases where the improvement is not substantial, the convex models slightly exceed or match the baselines, highlighting their robustness and broad applicability. Finally, although over-parameterization is commonly used to boost performance in non-convex models, we show that CGNNs yield shallow convex models that surpass their non-convex counterparts in both accuracy and model compactness.
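The abstract states that, once training is cast as a convex program, projected gradient methods solve it efficiently and to global optimality. The sketch below illustrates the generic projected-gradient scheme on a toy convex problem; the quadratic objective and unit-ball constraint are illustrative assumptions for exposition only, not the paper's actual kernelized training program.

```python
import numpy as np

def projected_gradient_descent(grad, project, w0, step=1e-3, iters=500):
    """Minimize a convex objective over a convex set C.

    grad:    function returning the gradient of the objective at w
    project: Euclidean projection onto C
    w0:      initial iterate (should lie in C)
    """
    w = w0
    for _ in range(iters):
        w = project(w - step * grad(w))  # gradient step, then project back onto C
    return w

# Illustrative stand-in: least squares over the unit Euclidean ball.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
grad = lambda w: A.T @ (A @ w - b)                    # gradient of 0.5*||Aw - b||^2
project = lambda w: w / max(1.0, np.linalg.norm(w))   # projection onto {w : ||w|| <= 1}
w_star = projected_gradient_descent(grad, project, np.zeros(10))
```

Because both the objective and the constraint set in this toy problem are convex, every iterate stays feasible and the method converges to a global minimizer, which is the property the convexification argument relies on.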