Quantum computing requires optimized control pulses to achieve high-fidelity quantum gates. We propose a machine-learning-based protocol to address two challenges in pulse optimization: evaluating gradients and modeling complex system dynamics. By training a recurrent neural network (RNN) to predict qubit behavior, our approach enables efficient gradient-based pulse optimization without requiring a detailed system model. First, we sample the qubit dynamics using random control pulses generated under weak prior assumptions. We then train the RNN on the system's observed responses and use the trained model to optimize high-fidelity control pulses. We demonstrate the effectiveness of this approach through simulations of a single $ST_0$ qubit.
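To make the three-step protocol concrete, the following is a minimal sketch, not the authors' implementation, written in PyTorch. The `measure_qubit` function is a toy placeholder standing in for the real $ST_0$ qubit (in the protocol, these responses would come from experiment or simulation), and all network sizes, hyperparameters, and the target observable are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real qubit: maps a pulse sequence to an
# observed response trajectory. In the protocol this would be measured
# data from the ST_0 qubit; here it is a toy nonlinear map.
def measure_qubit(pulses):  # pulses: (batch, T, 1)
    return torch.tanh(torch.cumsum(pulses, dim=1))

T = 20  # number of piecewise-constant pulse segments (illustrative)

# Step 1: sample qubit dynamics with random control pulses (weak priors).
train_pulses = torch.randn(1024, T, 1)
train_resp = measure_qubit(train_pulses)

# Step 2: train an RNN surrogate to predict the observed responses.
class Surrogate(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, u):
        h, _ = self.rnn(u)
        return self.head(h)

model = Surrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(train_pulses), train_resp)
    loss.backward()
    opt.step()

# Step 3: gradient-based pulse optimization through the frozen surrogate,
# so no detailed physical model of the system is needed.
for p in model.parameters():
    p.requires_grad_(False)
pulse = torch.zeros(1, T, 1, requires_grad=True)
target = torch.ones(1, 1)  # hypothetical target observable at final time
pulse_opt = torch.optim.Adam([pulse], lr=1e-2)
for _ in range(300):
    pulse_opt.zero_grad()
    pred_final = model(pulse)[:, -1, :]
    cost = nn.functional.mse_loss(pred_final, target)
    cost.backward()
    pulse_opt.step()
```

In this sketch the surrogate replaces the true dynamics for differentiation only; a gate-fidelity cost and experimental measurement statistics would take the place of the toy response and mean-squared target error.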