Node embeddings map graph vertices into low-dimensional Euclidean spaces while preserving structural information. They are central to tasks such as node classification, link prediction, and signal reconstruction. A key goal is to design node embeddings whose dot products capture meaningful notions of node similarity induced by the graph. Graph kernels offer a principled way to define such similarities, but their direct computation is often prohibitive for large networks. Inspired by random feature methods for kernel approximation in Euclidean spaces, we introduce randomized spectral node embeddings whose dot products estimate a low-rank approximation of a given graph kernel. We provide theoretical and empirical results showing that our embeddings achieve more accurate kernel approximations than existing methods, particularly for spectrally localized kernels. These results demonstrate the effectiveness of randomized spectral constructions for scalable and principled graph representation learning.
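The idea of embeddings whose dot products estimate a spectral graph kernel can be illustrated with a minimal sketch (not the paper's method): for a kernel K = f(L) defined through the eigenvalues of the normalized Laplacian L, projecting f(L)^{1/2} onto random Gaussian directions yields embeddings X with E[XXᵀ] = K. Here we compute the exact eigendecomposition, which is only feasible for small graphs; the heat kernel f(λ) = exp(-tλ) and all parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random undirected graph (symmetric adjacency, no self-loops).
n = 30
A = rng.random((n, n)) < 0.2
A = np.triu(A, 1)
A = (A | A.T).astype(float)

# Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
deg = A.sum(axis=1)
deg[deg == 0] = 1.0  # guard isolated vertices
Dinv = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - Dinv @ A @ Dinv

# Spectral kernel: heat kernel f(lam) = exp(-t * lam) applied to L's spectrum.
t = 1.0
lam, U = np.linalg.eigh(L)
K = U @ np.diag(np.exp(-t * lam)) @ U.T

# Randomized embedding: X = f(L)^{1/2} G / sqrt(k) with Gaussian G,
# so that E[X @ X.T] = f(L) = K.
k = 2000  # embedding dimension; larger k gives a more accurate estimate
sqrtK = U @ np.diag(np.exp(-0.5 * t * lam)) @ U.T
X = sqrtK @ rng.standard_normal((n, k)) / np.sqrt(k)

# Relative Frobenius error of the kernel estimate.
err = np.linalg.norm(X @ X.T - K) / np.linalg.norm(K)
print(f"relative approximation error: {err:.3f}")
```

Concentration of the Gaussian sketch drives the error down as k grows; the exact eigendecomposition used here costs O(n³), which is precisely what scalable randomized spectral constructions aim to avoid.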