Quantum computing promises to revolutionize many fields, yet executing quantum programs requires an effective compilation process that strategically maps quantum circuits onto the physical qubits of a quantum processor. The arrangement of qubits, or topology, is pivotal to circuit performance, yet its complexity often defies traditional heuristic or manual optimization. In this study, we introduce a novel approach that leverages reinforcement learning to dynamically tailor qubit topologies to the specifications of individual quantum circuits, guiding algorithm-driven quantum processor topology design to reduce the depth of the mapped circuit, a factor particularly critical for output accuracy on noisy quantum processors. Our method marks a significant departure from previous work, which has been constrained to mapping circuits onto a fixed processor topology. Experiments demonstrate notable improvements in circuit performance, with a reduction in circuit depth of at least 20\% in 60\% of the cases examined and of up to 46\% at best. Moreover, the depth reduction achieved by our approach becomes increasingly pronounced as the scale of the quantum circuits grows, demonstrating the scalability of our method with respect to problem size. This work advances the co-design of quantum processor architecture and algorithm mapping, offering a promising avenue for future research and development in the field.