Recently, message-passing graph neural networks (MPNNs) have shown potential for solving combinatorial and continuous optimization problems due to their ability to capture variable-constraint interactions. While existing approaches leverage MPNNs to approximate solutions or warm-start traditional solvers, they often lack feasibility guarantees, particularly in convex optimization settings. Here, we propose an iterative MPNN framework for solving convex optimization problems with provable feasibility guarantees. First, we demonstrate that MPNNs can provably simulate standard interior-point methods for quadratic problems with linear constraints, covering relevant problems such as support vector machines (SVMs). Second, to ensure feasibility, we introduce a variant that starts from a feasible point and iteratively restricts the search to the feasible region. Experimental results show that our approach outperforms existing neural baselines in solution quality and feasibility, generalizes well to unseen problem sizes, and, in some cases, achieves faster solution times than state-of-the-art solvers such as Gurobi.
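The feasibility-preserving variant can be illustrated with a minimal sketch: given a strictly feasible iterate and a raw step direction (e.g., one predicted by a network), a ratio test of the kind used in interior-point methods bounds the step size so the next iterate stays inside the polytope {x : Gx ≤ h}. The helper name `feasible_step` and the damping factor are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def feasible_step(x, d, G, h, shrink=0.99):
    """Damped step along d keeping G @ x <= h strictly feasible.

    Hypothetical helper: a ratio test bounds the step length by the
    nearest blocking constraint, then backs off by `shrink` so the
    iterate remains strictly interior (as in interior-point methods).
    """
    Gd = G @ d
    slack = h - G @ x           # positive for a strictly feasible x
    blocking = Gd > 1e-12       # constraints the step moves toward
    if not np.any(blocking):
        alpha = 1.0             # no constraint blocks the full step
    else:
        alpha = min(1.0, shrink * np.min(slack[blocking] / Gd[blocking]))
    return x + alpha * d

G = np.array([[1.0, 0.0], [0.0, 1.0]])
h = np.array([1.0, 1.0])
x = np.array([0.0, 0.0])        # strictly feasible start
d = np.array([2.0, 2.0])        # raw step (e.g., from the network)
x_new = feasible_step(x, d, G, h)
print(x_new)                    # stays strictly inside G x <= h
```

Iterating this step-then-clip rule from a feasible starting point is what lets every intermediate solution remain feasible, in contrast to methods that only enforce constraints at the final output.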