Quadratic programming (QP) forms a crucial foundation in optimization, encompassing a broad spectrum of domains and serving as the basis for more advanced algorithms. Consequently, as the scale and complexity of modern applications continue to grow, the development of efficient and reliable QP algorithms becomes increasingly vital. In this context, this paper introduces a novel deep learning-aided distributed optimization architecture designed for tackling large-scale QP problems. First, we combine the state-of-the-art Operator Splitting QP (OSQP) method with a consensus approach to derive DistributedQP, a new method tailored for network-structured problems with guaranteed convergence to optimality. Subsequently, we unfold this optimizer into a deep learning framework, leading to DeepDistributedQP, which leverages learned policies to accelerate reaching the desired accuracy within a restricted number of iterations. Our approach is also theoretically grounded through Probably Approximately Correct (PAC)-Bayes theory, providing generalization bounds on the expected optimality gap for unseen problems. The proposed framework and its centralized version DeepQP significantly outperform their standard optimization counterparts on a variety of tasks, including randomly generated problems, optimal control, linear regression, and transportation networks. Notably, DeepDistributedQP demonstrates strong generalization by training on small problems and scaling to solve much larger ones (up to 50K variables and 150K constraints) using the same policy. Moreover, it achieves orders-of-magnitude improvements in wall-clock time compared to OSQP. Finally, we demonstrate the certifiable performance guarantees of our approach, which ensure higher-quality solutions than those of traditional optimizers.
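To make the unrolled-optimizer idea concrete, the following is a minimal sketch, not the authors' implementation, of a fixed-iteration OSQP-style ADMM loop for a box-constrained QP, in which the per-iteration penalty schedule `rhos` stands in for a learned policy. The function name, the simplified splitting (relaxation omitted), the toy problem, and the constant schedule are all illustrative assumptions.

```python
import numpy as np

def unrolled_osqp_admm(P, q, A, l, u, rhos, sigma=1e-6):
    """Illustrative fixed-iteration (unrolled) ADMM for the QP
        min 0.5 x'Px + q'x   s.t.   l <= Ax <= u,
    following an OSQP-like splitting. The schedule `rhos` is a
    hypothetical stand-in for a learned per-iteration policy."""
    n, m = P.shape[0], A.shape[0]
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    for rho in rhos:
        # x-update: solve the regularized linear system
        M = P + sigma * np.eye(n) + rho * A.T @ A
        rhs = sigma * x - q + A.T @ (rho * z - y)
        x = np.linalg.solve(M, rhs)
        # z-update: project onto the box [l, u]
        Ax = A @ x
        z = np.clip(Ax + y / rho, l, u)
        # dual update
        y = y + rho * (Ax - z)
    return x, z, y

# Toy usage: a 2-variable QP; a learned policy would supply `rhos`.
P = np.array([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = np.vstack([np.eye(2), np.ones((1, 2))])
l = np.array([0.0, 0.0, 1.0])
u = np.array([0.7, 0.7, 1.0])
rhos = np.full(20, 1.0)  # constant schedule, purely illustrative
x, _, _ = unrolled_osqp_admm(P, q, A, l, u, rhos)
print(x)
```

Since the number of iterations is fixed, the whole loop can be differentiated through, which is what allows the per-iteration parameters to be trained rather than hand-tuned.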