Equivariant Graph Neural Networks (GNNs) that incorporate E(3) symmetry have achieved significant success in various scientific applications. As one of the most successful models, EGNN leverages a simple scalarization technique to perform equivariant message passing over only Cartesian vectors (i.e., 1st-degree steerable vectors), enjoying greater efficiency and efficacy than equivariant GNNs that use higher-degree steerable vectors. This success suggests that higher-degree representations might be unnecessary. In this paper, we disprove this hypothesis by exploring the expressivity of equivariant GNNs on symmetric structures, including $k$-fold rotations and regular polyhedra. We theoretically demonstrate that, on such structures, an equivariant GNN always degenerates to a zero function if the degree of its output representations is fixed to 1 or to certain other specific values. Based on this theoretical insight, we propose HEGNN, a high-degree version of EGNN that increases expressivity by incorporating high-degree steerable vectors while preserving EGNN's efficiency through the scalarization trick. Our extensive experiments demonstrate that HEGNN not only accords with our theoretical analysis on toy datasets of symmetric structures, but also achieves substantial improvements on more complicated datasets such as $N$-body and MD17. Our theoretical findings and empirical results potentially open up new possibilities for research on equivariant GNNs.
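To make the central mechanism concrete, the sketch below illustrates how the scalarization trick can extend to higher degrees: invariant scalars (squared distances and per-degree inner products of steerable features) gate steerable vectors, sidestepping expensive Clebsch-Gordan tensor products. This is a minimal illustration under our own assumptions, not the paper's implementation; `sph_harm_deg2`, `message`, and the `tanh` gate are hypothetical stand-ins for HEGNN's learned MLPs and multi-degree message passing.

```python
# Minimal NumPy sketch of scalarization extended to degree-2 steerable
# vectors, in the spirit of HEGNN (illustrative simplification only).
import numpy as np

def sph_harm_deg2(v):
    """Real degree-2 spherical-harmonic features of a 3D vector v.
    Constants are chosen so the 5 components transform under rotation by an
    orthogonal Wigner matrix, making their inner products invariant."""
    x, y, z = v
    return np.array([
        np.sqrt(3.0) * x * y,
        np.sqrt(3.0) * y * z,
        0.5 * (2.0 * z**2 - x**2 - y**2),
        np.sqrt(3.0) * x * z,
        0.5 * np.sqrt(3.0) * (x**2 - y**2),
    ])

def message(x_i, x_j, h2_i, h2_j):
    """Edge message built from invariant scalars only ('scalarization'):
    a squared distance and a per-degree inner product stand in for the
    Clebsch-Gordan tensor products used by high-degree models."""
    r_ij = x_i - x_j
    scalars = np.array([r_ij @ r_ij, h2_i @ h2_j])  # both rotation invariant
    gates = np.tanh(scalars)             # stand-in for a learned gating MLP
    m1 = gates[0] * r_ij                 # degree-1 (Cartesian) message
    m2 = gates[1] * sph_harm_deg2(r_ij)  # degree-2 steerable message
    return m1, m2

# Sanity check: rotating all inputs rotates the degree-1 message with them,
# while the invariant gates (and hence all norms) are unchanged.
rng = np.random.default_rng(0)
x_i, x_j = rng.normal(size=3), rng.normal(size=3)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R) < 0:                 # force a proper rotation, det(R) = +1
    R[:, 0] *= -1
m1, m2 = message(x_i, x_j, sph_harm_deg2(x_i), sph_harm_deg2(x_j))
m1_r, m2_r = message(R @ x_i, R @ x_j,
                     sph_harm_deg2(R @ x_i), sph_harm_deg2(R @ x_j))
assert np.allclose(m1_r, R @ m1)                              # equivariance
assert np.allclose(np.linalg.norm(m2_r), np.linalg.norm(m2))  # D^2 orthogonal
```

Because the gates depend only on invariants, equivariance holds by construction for every degree, which is what lets this scheme keep EGNN-style efficiency while carrying the higher-degree information the paper shows is needed on symmetric structures.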