To demonstrate the supremacy of quantum computing, increasingly large-scale superconducting quantum computing chips are being designed and fabricated, driving demand for electronic design automation (EDA) tools with better efficiency and effectiveness. However, the complexity of simulating quantum systems poses a significant challenge to the computer-aided design of quantum chips. Harnessing the scalability of graph neural networks (GNNs), we propose a parameter design algorithm for large-scale superconducting quantum circuits. The algorithm relies on a so-called 'three-stair scaling' mechanism, which comprises two neural-network models: an evaluator, trained with supervision on small-scale circuits and applied to medium-scale circuits, and a designer, trained without supervision on medium-scale circuits and applied to large-scale ones. We demonstrate the algorithm by mitigating quantum crosstalk errors, which are commonly present and closely related to the graph structures and parameter assignments of superconducting quantum circuits. Parameters for both single- and two-qubit gates are considered simultaneously. Numerical results indicate that the well-trained designer achieves notable advantages in both efficiency and effectiveness, especially for large-scale circuits. For example, on superconducting quantum circuits of around 870 qubits, the trained designer completes the frequency design task in only 27 seconds, whereas the traditional Snake algorithm requires 90 minutes. More importantly, the crosstalk errors obtained with our algorithm are only 51% of those produced by the Snake algorithm. Overall, this study provides an initial demonstration of the advantages of applying graph neural networks to parameter design in quantum processors, and offers insights for systems in which large-scale numerical simulation is a bottleneck for electronic design automation.
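The scalability claim above rests on a basic property of message-passing GNNs: the learned weights are shared across all nodes of the circuit graph, so a model trained on medium-scale circuits can be applied unchanged to larger ones. The following is a minimal, dependency-free sketch of one message-passing round on a qubit coupling graph; the update rule, weight values, and frequency features are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (illustrative, not the paper's implementation): one
# message-passing layer on a superconducting-circuit coupling graph.
# Because the weights w_self and w_neigh are shared across nodes, the
# same trained layer applies to graphs of any size (e.g. ~870 qubits).

def message_passing_layer(adjacency, features, w_self, w_neigh):
    """One round of neighbor aggregation on a circuit graph.

    adjacency: dict node -> list of neighboring qubit nodes
    features:  dict node -> scalar feature (e.g. a frequency guess, GHz)
    w_self, w_neigh: shared weights, independent of qubit count
    """
    updated = {}
    for node, neighbors in adjacency.items():
        # Mean-aggregate neighbor features, then mix with the node's own.
        agg = sum(features[n] for n in neighbors) / max(len(neighbors), 1)
        updated[node] = w_self * features[node] + w_neigh * agg
    return updated

# A 4-qubit linear coupling graph: 0 - 1 - 2 - 3
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
freqs = {0: 5.0, 1: 5.3, 2: 5.1, 3: 5.4}  # hypothetical initial guesses

new_freqs = message_passing_layer(graph, freqs, w_self=0.8, w_neigh=0.2)
```

In an actual designer network, several such layers would be stacked and the weights trained (here, without supervision, against a crosstalk-error objective predicted by the evaluator); the sketch only shows why node-count-independent weights make cross-scale transfer possible.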