Recent work on neural algorithmic reasoning has shown that graph neural networks (GNNs) can learn to execute classical algorithms. To date, however, this has always relied on a recurrent architecture in which each GNN iteration is aligned with a step of the algorithm. Since an algorithm's solution is often an equilibrium, we conjecture, and empirically validate, that a network can be trained to solve algorithmic problems by directly finding this equilibrium. Note that this removes the need to match each GNN iteration to a step of the algorithm.
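To make the idea concrete, here is a minimal toy sketch (not the paper's actual model; `gnn_step`, the weight `W`, and the example graph are illustrative assumptions): instead of unrolling a fixed number of message-passing steps to mirror the algorithm, a single GNN-style update is iterated until it reaches a fixed point h* = f(h*, x), and that equilibrium is taken as the output.

```python
import numpy as np

def gnn_step(h, x, adj, W=0.4):
    # One message-passing update: mix averaged neighbour states with node inputs.
    # Keeping W < 1 makes the map contractive, so a unique equilibrium exists.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return np.tanh(W * (adj @ h) / deg + x)

def solve_equilibrium(x, adj, tol=1e-8, max_iter=1000):
    # Iterate h <- f(h, x) to convergence rather than for a fixed step count.
    h = np.zeros_like(x)
    for _ in range(max_iter):
        h_next = gnn_step(h, x, adj)
        if np.max(np.abs(h_next - h)) < tol:
            return h_next
        h = h_next
    return h

# A 3-node path graph with scalar node features (purely illustrative).
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.array([[0.5], [-0.2], [0.1]])
h_star = solve_equilibrium(x, adj)
# At equilibrium, one further update leaves h_star (numerically) unchanged.
print(np.max(np.abs(gnn_step(h_star, x, adj) - h_star)) < 1e-6)
```

The point of the sketch is the stopping criterion: convergence of the state itself replaces any alignment between network iterations and algorithm steps.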