Learning-based methods have gained attention as general-purpose solvers because they automatically learn problem-specific heuristics, reducing the need for manually crafted ones. However, these methods often face scalability challenges. To address this issue, the improved Sampling algorithm for Combinatorial Optimization (iSCO), which uses discrete Langevin dynamics, was proposed and has demonstrated better performance than several learning-based solvers. This study proposes a different approach that combines gradient-based updates through continuous relaxation with Quasi-Quantum Annealing (QQA). QQA smoothly transitions the objective function from a simple convex form, minimized at half-integral values, to the original objective, in which the relaxed variables attain their minima only at discrete points. Furthermore, we incorporate communication across parallel runs, leveraging GPUs, to enhance exploration and accelerate convergence. Numerical experiments demonstrate that our method is a competitive general-purpose solver, achieving performance comparable to iSCO and learning-based solvers across various benchmark problems. Notably, our method exhibits superior speed-quality trade-offs on large-scale instances compared with iSCO, learning-based solvers, commercial solvers, and specialized algorithms.
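The annealed transition described above can be illustrated with a minimal sketch: interpolate from a convex function minimized at half-integral values to the original objective plus a discreteness penalty, following the gradient of the relaxed variables. The quadratic test objective `f(x) = x^T Q x`, the penalty form, and all constants here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2  # symmetric QUBO-style matrix (placeholder objective)

def grad_f(x):
    # Gradient of the placeholder objective f(x) = x^T Q x.
    return 2 * Q @ x

def grad_annealed(x, lam, c=20.0):
    # Gradient of the interpolated objective
    #   (1 - lam) * sum((x - 0.5)^2)            # convex, minimized at x = 0.5
    #   + lam * (f(x) + c * sum(x * (1 - x)))   # original + discreteness penalty
    return (1 - lam) * 2 * (x - 0.5) + lam * (grad_f(x) + c * (1 - 2 * x))

def qqa_sketch(steps=2000, lr=0.02):
    x = np.full(n, 0.5) + 0.01 * rng.normal(size=n)  # start near half-integers
    for t in range(steps):
        lam = (t + 1) / steps  # anneal from the convex surrogate to the objective
        x = np.clip(x - lr * grad_annealed(x, lam), 0.0, 1.0)  # stay in [0, 1]^n
    return x

x = qqa_sketch()
print(np.round(x), x @ Q @ x)
```

Because the penalty term is concave in each coordinate, the relaxed variables are driven toward the box corners as the anneal progresses, so rounding the final `x` yields a discrete solution.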