Graph Neural Networks (GNNs) are non-Euclidean deep learning models for graph-structured data. Despite their successful and diverse applications, oversmoothing prevents deep architectures: node features converge toward a single fixed point, which severely limits their potential to solve complex tasks. To counteract this tendency, we propose a plug-and-play module consisting of three steps: Cluster-Normalize-Activate (CNA). With CNA modules, GNNs search for and form super nodes in each layer, which are then normalized and activated individually. We demonstrate on node classification and property prediction tasks that CNA significantly improves accuracy over the state of the art. In particular, CNA reaches 94.18% and 95.75% accuracy on Cora and CiteSeer, respectively. CNA also benefits GNNs in regression tasks, reducing the mean squared error compared to all baselines. At the same time, GNNs with CNA require substantially fewer learnable parameters than competing architectures.
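The three steps of a CNA module can be sketched as follows. This is a minimal, purely illustrative NumPy sketch, not the paper's implementation: the choice of k-means clustering, per-cluster standardization, and a ReLU stand-in for the activation are all assumptions made for illustration; the actual clustering procedure and (possibly learnable) per-cluster activations may differ.

```python
import numpy as np

def cna_step(features, num_clusters=2, iters=10, seed=0):
    """Illustrative Cluster-Normalize-Activate step on node features.

    Assumptions (not from the paper):
      - Cluster: naive k-means over node feature rows.
      - Normalize: standardize features within each cluster.
      - Activate: plain ReLU applied per cluster.
    """
    x = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)

    # -- Cluster: naive k-means, centers initialized from random nodes
    centers = x[rng.choice(len(x), num_clusters, replace=False)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(num_clusters):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean(axis=0)

    out = np.zeros_like(x)
    for k in range(num_clusters):
        mask = labels == k
        if not np.any(mask):
            continue  # empty cluster: nothing to normalize
        xk = x[mask]
        # -- Normalize: zero mean, unit variance within the cluster
        xk = (xk - xk.mean(axis=0)) / (xk.std(axis=0) + 1e-8)
        # -- Activate: per-cluster nonlinearity (ReLU as a stand-in)
        out[mask] = np.maximum(xk, 0.0)
    return out, labels
```

In a GNN, a step like this would be applied to the node embeddings of each layer, so that nodes grouped into the same super node are normalized and activated together rather than globally.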