We demonstrate that a single training trajectory can transform a graph neural network into an unsupervised heuristic for combinatorial optimization. Focusing on the Travelling Salesman Problem, we show that encoding global structural constraints as an inductive bias enables a non-autoregressive model to generate solutions via direct forward passes, without search, supervision, or sequential decision-making. At inference time, dropout and snapshot ensembling allow a single model to act as an implicit ensemble, reducing optimality gaps through increased solution diversity. Our results establish that graph neural networks require neither supervised training nor explicit search to be effective. Instead, they can internalize global combinatorial structure and function as strong, learned heuristics. This reframes the role of learning in combinatorial optimization: from augmenting classical algorithms to directly instantiating new heuristics.
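The inference-time idea above (keeping dropout active and averaging several stochastic forward passes into one edge "heat-map" before decoding a tour) can be sketched as follows. This is a minimal toy illustration, not the paper's actual model: `edge_logits` stands in for a trained GNN's edge scores, and the function names `dropout_pass`, `implicit_ensemble`, and `greedy_tour` are hypothetical; snapshot ensembling (averaging over saved training checkpoints) would slot in analogously by averaging heat-maps across checkpoints.

```python
import random

def dropout_pass(edge_logits, p=0.2, rng=None):
    """One stochastic forward pass: randomly zero edge scores (dropout left on)."""
    rng = rng or random
    return {e: (0.0 if rng.random() < p else s) for e, s in edge_logits.items()}

def implicit_ensemble(edge_logits, k=32, p=0.2, seed=0):
    """Average k dropout passes into a single edge heat-map (the implicit ensemble)."""
    rng = random.Random(seed)
    heat = {e: 0.0 for e in edge_logits}
    for _ in range(k):
        sample = dropout_pass(edge_logits, p=p, rng=rng)
        for e, s in sample.items():
            heat[e] += s / k
    return heat

def greedy_tour(n, heat, start=0):
    """Decode a tour by repeatedly following the hottest unused edge."""
    tour, seen = [start], {start}
    while len(tour) < n:
        u = tour[-1]
        v = max((w for w in range(n) if w not in seen),
                key=lambda w: heat.get((min(u, w), max(u, w)), 0.0))
        tour.append(v)
        seen.add(v)
    return tour

if __name__ == "__main__":
    n = 5
    # Toy scores favouring the ring 0-1-2-3-4-0 (stand-in for a GNN's output).
    edges = {(i, j): 0.0 for i in range(n) for j in range(i + 1, n)}
    for i in range(n):
        u, v = i, (i + 1) % n
        edges[(min(u, v), max(u, v))] = 1.0
    heat = implicit_ensemble(edges, k=64, p=0.2)
    print(greedy_tour(n, heat))
```

Varying the dropout seed (or averaging over different checkpoint subsets) yields diverse heat-maps and hence diverse decoded tours, which is the mechanism the abstract credits for the reduced optimality gaps.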