A wide range of graph embedding objectives decompose into two components: one that attracts the embeddings of nodes that are perceived as similar, and another that repels embeddings of nodes that are perceived as dissimilar. Because real-world graphs are sparse and the number of dissimilar pairs grows quadratically with the number of nodes, Skip-Gram Negative Sampling (SGNS) has emerged as a popular and efficient repulsion approach: it repels each node from a sample of dissimilar nodes rather than from all of them. In this work, we show that node-wise repulsion is, in aggregate, an approximate re-centering of the node embedding dimensions. Such dimension operations are much more scalable than node operations, and the dimension approach also yields a simpler geometric interpretation of the repulsion. Our result extends findings from the self-supervised learning literature to the skip-gram model, establishing a connection between skip-gram node contrast and dimension regularization. We show that in the limit of large graphs, under mild regularity conditions, the original node repulsion objective converges to optimization with dimension regularization. We use this observation to propose an algorithm augmentation framework that speeds up any existing SGNS-based algorithm, supervised or unsupervised: the framework prioritizes node attraction and replaces SGNS with dimension regularization. We instantiate this generic framework for LINE and node2vec and show that the augmented algorithms preserve downstream performance while dramatically increasing efficiency.
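The re-centering claim can be sketched numerically. Below is a minimal, hypothetical illustration (not the paper's exact regularizer): if the dimension regularizer is taken to be a squared penalty on the per-dimension mean of the embedding matrix, its gradient with respect to every node embedding is the same mean vector, so a single gradient step re-centers each embedding dimension to zero mean — a dimension operation that costs O(nd) regardless of how many dissimilar pairs exist.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 16                      # nodes, embedding dimensions
Z = rng.normal(loc=0.5, size=(n, d))  # node embedding matrix (rows = nodes)

# Hypothetical dimension regularizer: R(Z) = (n/2) * ||mean(Z, axis=0)||^2.
# (A stand-in for the paper's regularizer, chosen for illustration.)
mu = Z.mean(axis=0)                 # per-dimension mean, shape (d,)
reg = 0.5 * n * float(mu @ mu)

# Gradient of R w.r.t. each row z_i is exactly mu (identical for all nodes),
# so one full gradient step with lr = 1 subtracts mu from every embedding,
# i.e., it re-centers every dimension to zero mean.
grad = np.broadcast_to(mu, Z.shape)
Z_new = Z - 1.0 * grad

print(np.abs(Z_new.mean(axis=0)).max())  # per-dimension means are now ~0
```

This is the sense in which aggregate repulsion becomes a dimension operation: instead of pushing each node away from sampled negatives, one cheap update pulls the whole point cloud back to the origin along each dimension.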