Various graph neural networks (GNNs) with advanced training techniques and model designs have been proposed for link prediction tasks. However, outdated baseline models may lead to an overestimation of the benefits provided by these novel approaches. To address this, we systematically investigate the potential of Graph Autoencoders (GAEs) by carefully tuning hyperparameters and employing two simple techniques: orthogonal embeddings and linear propagation. Our findings reveal that a well-optimized GAE can match the performance of more complex models while offering greater computational efficiency.
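To make the two techniques concrete, the following is a minimal, hypothetical sketch of a GAE-style link scorer: node features are initialised as rows of an orthogonal matrix (here simply the identity), propagated linearly through a symmetrically normalised adjacency with no nonlinearities, and edges are scored with the standard inner-product decoder. The graph, hop count, and function names are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Toy 6-node cycle graph (illustrative assumption, not the paper's benchmark).
n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]

# Adjacency with self-loops, symmetrically normalised:
# A_hat = D^{-1/2} (A + I) D^{-1/2}
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
A += np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Orthogonal embedding: each node starts as a row of an orthogonal matrix.
X = np.eye(n)

# Linear propagation: K hops of A_hat, with no weights or activations.
K = 2
Z = np.linalg.matrix_power(A_hat, K) @ X

def score(u, v):
    """Inner-product decoder: sigmoid(z_u . z_v)."""
    return 1.0 / (1.0 + np.exp(-Z[u] @ Z[v]))

# Adjacent nodes should score higher than distant ones on the cycle.
print(score(0, 1) > score(0, 3))
```

In a trained GAE the propagated embeddings would be fit against observed edges; this sketch only shows how orthogonal initialisation plus linear propagation yields structure-aware scores without any nonlinear layers.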