We present a new approach to the Neural Optimal Transport (NOT) training procedure, capable of accurately and efficiently estimating the optimal transport plan via a specific regularisation on the dual Kantorovich potentials. The main bottleneck of existing NOT solvers is the procedure of finding a near-exact approximation of the conjugate operator (i.e., the c-transform), which is done either by optimizing over non-convex max-min objectives or by computationally intensive fine-tuning of the initial approximate prediction. We resolve both issues by proposing a new, theoretically justified loss in the form of an expectile regularisation, which enforces binding conditions on the learning process of the dual potentials. Such regularisation provides an upper-bound estimate over the distribution of possible conjugate potentials and keeps learning stable, completely eliminating the need for extensive additional fine-tuning. The proposed method, called Expectile-Regularised Neural Optimal Transport (ENOT), outperforms previous state-of-the-art approaches on the established Wasserstein-2 benchmark tasks by a large margin (up to a 3-fold improvement in quality and up to a 10-fold improvement in runtime). Moreover, we showcase the performance of ENOT with varying cost functions on different tasks, such as image generation, demonstrating the robustness of the proposed algorithm.
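The expectile regularisation named above builds on the asymmetric squared loss from expectile regression, which weights positive and negative residuals differently so that the fit tracks an upper expectile of the residual distribution rather than its mean. A minimal sketch of that loss (the function name, `tau` value, and sign convention are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def expectile_loss(residual, tau=0.9):
    """Asymmetric squared loss from expectile regression (illustrative sketch).

    residual: target minus prediction.
    tau in (0, 1): with tau > 0.5, under-predictions (residual > 0) are
    penalized more heavily, pushing the fit toward an upper expectile
    of the residual distribution.
    """
    weight = np.where(residual >= 0.0, tau, 1.0 - tau)  # asymmetric weighting
    return weight * residual ** 2

# Same-magnitude residuals receive different penalties:
print(expectile_loss(2.0))   # under-prediction, weighted by tau = 0.9
print(expectile_loss(-2.0))  # over-prediction, weighted by 1 - tau = 0.1
```

Setting `tau = 0.5` recovers the ordinary squared loss; values near 1 make the loss behave like an upper-bound estimator, which is the role the regulariser plays for the conjugate potentials in the abstract above.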