We propose a scalable framework for solving the Maximum Cut (MaxCut) problem on large graphs using projected gradient ascent on quadratic objectives. Our approach is differentiable and exploits GPUs for gradient-based optimization; it is not a machine learning method and requires no training data. Starting from a continuous relaxation of the classical quadratic binary formulation, we present a parallelized strategy that explores multiple initialization vectors in batch. We analyze the relaxed objective, showing that although it is convex, maximizing it over the box constraint is a non-convex problem whose fixed points correspond to local optima, particularly at boundary points; this highlights a key challenge for gradient-based approaches. To improve exploration, we introduce a lifted quadratic formulation that over-parameterizes the solution space, and we provide a theoretical characterization of the fixed points of this lifted problem. Finally, we propose DECO, a dimension-alternating algorithm that switches between the unlifted and lifted formulations, combined with importance-based degree initialization and a population-based evolutionary hyper-parameter search. Experiments on diverse graph families show that our methods attain comparable or superior performance relative to recent neural-network and GPU-accelerated sampling approaches.
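The core routine described above, batched projected gradient ascent on the box relaxation of the MaxCut quadratic, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name `maxcut_pga`, the step size, and the uniform initialization are assumptions for the sketch, and it omits the lifted formulation, DECO, and the evolutionary hyper-parameter search.

```python
import numpy as np

def maxcut_pga(adj, n_starts=32, steps=200, lr=0.1, seed=0):
    """Batched projected gradient ascent on the box relaxation
        max_{x in [-1,1]^n}  (1/4) * sum_{i,j} w_ij * (1 - x_i x_j),
    run from n_starts random initializations in parallel.
    (Illustrative sketch; hyper-parameters are assumed, not from the paper.)"""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # Batch of candidate relaxed solutions, one per row.
    X = rng.uniform(-1.0, 1.0, size=(n_starts, n))
    for _ in range(steps):
        # Gradient of the relaxed objective w.r.t. each row: -(1/2) W x.
        grad = -0.5 * X @ adj
        # Ascent step followed by projection back onto the box [-1, 1]^n.
        X = np.clip(X + lr * grad, -1.0, 1.0)
    # Round each relaxed solution to a cut assignment in {-1, +1}^n.
    S = np.sign(X)
    S[S == 0] = 1.0
    # Cut value per candidate: (1/4) * (sum_ij w_ij - s^T W s).
    cuts = 0.25 * (adj.sum() - np.einsum('bi,ij,bj->b', S, adj, S))
    best = int(np.argmax(cuts))
    return S[best], float(cuts[best])
```

On a toy triangle graph (maximum cut of 2), the batch of starts lets the rounded iterates escape the all-equal fixed point, which is exactly the boundary-fixed-point issue the analysis above concerns.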