Optimal Transport (OT) problems are a cornerstone of many applications, but solving them is computationally expensive. To address this, we propose UNOT (Universal Neural Optimal Transport), a novel framework that accurately predicts (entropic) OT distances and plans between discrete measures of variable resolution for a given cost function. UNOT builds on Fourier Neural Operators (FNOs), a universal class of neural networks that map between function spaces and are discretization-invariant, which enables our network to process measures of varying sizes. The network is trained adversarially using a second, generating network and a self-supervised bootstrapping loss. We theoretically justify the use of FNOs, prove that our generator is universal, and show that minimizing the bootstrapping loss provably minimizes the ground-truth loss. Through extensive experiments, we show that our network not only accurately predicts optimal transport distances and plans across a wide range of datasets, but also correctly captures the geometry of the Wasserstein space. Furthermore, we show that our network yields a state-of-the-art initialization for the Sinkhorn algorithm, significantly outperforming existing approaches.
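For context on the warm-start claim: entropic OT between discrete measures is standardly solved with Sinkhorn iterations, which can be initialized from a predicted dual potential instead of the usual uniform scaling. The following NumPy sketch is our own illustration of that mechanism, not the paper's implementation; the `f_init` parameter (a hypothetical stand-in for a network-predicted potential) marks where such a prediction would enter.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=100, f_init=None):
    """Entropic OT via Sinkhorn iterations (plain, non-log-domain form).

    a, b   : source/target histograms (nonnegative, summing to 1).
    C      : cost matrix of shape (len(a), len(b)).
    eps    : entropic regularization strength.
    f_init : optional dual-potential warm start, e.g. a network prediction
             (hypothetical parameter for illustration).
    Returns the transport plan P and the transport cost <P, C>.
    """
    K = np.exp(-C / eps)  # Gibbs kernel
    # A warm start enters as an initial scaling u = exp(f / eps).
    u = np.ones_like(a) if f_init is None else np.exp(f_init / eps)
    for _ in range(n_iters):
        v = b / (K.T @ u)  # match column marginals
        u = a / (K @ v)    # match row marginals
    P = u[:, None] * K * v[None, :]
    return P, np.sum(P * C)
```

A good initialization reduces the number of iterations needed to reach a given marginal error, which is the sense in which a learned initializer can accelerate the solver.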