Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data across a broad spectrum of disciplines, from life to social and engineering sciences. Despite their practical success, our theoretical understanding of the properties of GNNs remains highly incomplete. Recent theoretical advancements primarily focus on elucidating the coarse-grained expressive power of GNNs, predominantly employing combinatorial techniques. However, these studies do not perfectly align with practice, particularly in understanding the generalization behavior of GNNs when trained with stochastic first-order optimization techniques. In this position paper, we argue that the graph machine learning community needs to shift its attention to developing a balanced theory of graph machine learning, focusing on a more thorough understanding of the interplay of expressive power, generalization, and optimization.