Graph Transformers (GTs) have demonstrated significant advantages in graph representation learning through their global attention mechanisms. However, the self-attention mechanism in GTs tends to neglect the inductive biases inherent in graph structures, making it challenging to effectively capture essential structural information. To address this issue, we propose a novel approach that integrates graph inductive bias into self-attention mechanisms by leveraging quantum technology for structural encoding. In this paper, we introduce the Graph Quantum Walk Transformer (GQWformer), a novel GNN framework that utilizes quantum walks on attributed graphs to generate node quantum states. These quantum states encapsulate rich structural attributes and serve as inductive biases for the transformer, thereby enabling the generation of more meaningful attention scores. By further incorporating a recurrent neural network, our design strengthens the model's ability to attend to both local and global information. We conducted comprehensive experiments across five publicly available datasets to evaluate the effectiveness of our model. The results indicate that GQWformer outperforms existing state-of-the-art graph classification algorithms. These findings highlight the significant potential of integrating quantum computing methodologies with traditional GNNs to advance the field of graph representation learning, providing a promising direction for future research and applications.
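To make the core idea concrete, the sketch below shows one plausible reading of a quantum-walk structural bias for attention, not the authors' GQWformer implementation. It assumes the walk Hamiltonian is the adjacency matrix, takes the walk's transition probabilities as a node-to-node structural similarity, and injects their logarithm additively into the attention scores; the function names `quantum_walk_bias` and `biased_attention` are hypothetical.

```python
# A minimal sketch (assumptions noted above), not the paper's method.
import numpy as np
from scipy.linalg import expm

def quantum_walk_bias(adj: np.ndarray, t: float = 1.0) -> np.ndarray:
    """Transition probabilities of a continuous-time quantum walk.

    U(t) = exp(-i * A * t); entry (i, j) of the returned matrix is
    |<i| U(t) |j>|^2, a structure-aware similarity between nodes.
    """
    U = expm(-1j * adj * t)   # unitary walk operator (A symmetric => U unitary)
    return np.abs(U) ** 2     # node-to-node transition probabilities

def biased_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray,
                     bias: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention with an additive structural bias."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + np.log(bias + 1e-9)  # inject graph structure
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V

# Toy usage: a 4-node path graph with random node features as Q, K, V.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))
out = biased_attention(X, X, X, quantum_walk_bias(A, t=1.0))
```

Because the walk operator is unitary, each row of the bias matrix sums to one, so the log-bias acts like a learned-free relative-position term that steers attention toward structurally close nodes.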