Graph neural networks (GNNs) are increasingly applied to hard optimization problems, often with claims of superiority over classical heuristics. However, such claims risk being unfounded due to the lack of standard benchmarks on truly hard instances. From a statistical physics perspective, we propose new hard benchmarks based on random problems. We provide these benchmarks, along with performance results from both classical heuristics and GNNs. Our fair comparison shows that classical algorithms still outperform GNNs. We discuss the challenges neural networks face in this domain. Future claims of superiority can be made more robust using our benchmarks, available at https://github.com/ArtLabBocconi/RandCSPBench.