Quantum Graph Neural Networks (QGNNs) offer a promising approach to combining quantum computing with graph-structured data processing. While classical Graph Neural Networks (GNNs) are scalable and robust, existing QGNNs often lack flexibility due to graph-specific quantum circuit designs, limiting their applicability to diverse real-world problems. To address this, we propose a versatile QGNN framework inspired by GraphSAGE, using quantum models as aggregators. We integrate inductive representation learning techniques with parameterized quantum convolutional and pooling layers, bridging classical and quantum paradigms. The convolutional layer is flexible, allowing tailored designs for specific tasks. Benchmarked on a node regression task with the QM9 dataset, our framework, using a single minimal circuit for all aggregation steps, handles molecules with varying numbers of atoms without changing the number of qubits or the circuit architecture. While classical GNNs achieve higher training performance, our quantum approach remains competitive and often shows stronger generalization as molecular complexity increases. We also observe faster learning in early training epochs. To mitigate the trainability limitations of the single-circuit setup, we extend the framework with multiple quantum aggregators and evaluate it on QM9. Assigning distinct circuits to each hop substantially improves training performance across all cases. Additionally, we numerically demonstrate the absence of barren plateaus as qubit numbers increase, suggesting that the proposed model can scale to larger, more complex graph-based problems.
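To make the core idea concrete, the following is a minimal, illustrative sketch of GraphSAGE-style message passing with a parameterized quantum circuit as the aggregator, simulated with a single-qubit statevector in NumPy. All names (`quantum_aggregate`, `weights`, the angle-encoding scheme) are hypothetical placeholders for illustration, not the circuit design proposed in the paper.

```python
import numpy as np

# Pauli-Z observable for the single-qubit expectation value.
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def quantum_aggregate(neighbor_feats, weights):
    """Toy quantum aggregator (assumed design, not the paper's circuit):
    encode the mean of the neighbor features as an RY angle, apply one
    trainable RY layer, and return <Z> as the updated node feature."""
    angle = float(np.mean(neighbor_feats))   # classical mean pooling
    state = np.array([1, 0], dtype=complex)  # |0>
    state = ry(angle) @ state                # feature (angle) encoding
    state = ry(weights[0]) @ state           # parameterized layer
    return float(np.real(state.conj() @ Z @ state))

# One aggregation hop on a 3-node path graph 0-1-2; because the circuit
# pools neighbors to a fixed-size input, the same qubit count handles
# any node degree (mirroring the varying-atom-count property above).
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = np.array([0.2, 0.5, 0.9])
weights = np.array([0.3])
new_feats = np.array([quantum_aggregate(feats[adj[v]], weights) for v in adj])
print(new_feats)
```

Since the aggregator's input size is fixed by the pooling step rather than by the graph, the same circuit is reused for every node and every hop; the multi-aggregator extension described above would simply assign a distinct `weights` vector (or circuit) per hop.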