Computational optimal transport (OT) offers a principled framework for generative modeling. Neural OT methods, which use neural networks to learn an OT map (or potential) from data in an amortized way, can be evaluated out of sample after training, but existing approaches are tailored to Euclidean geometry. Extending neural OT to high-dimensional Riemannian manifolds remains an open challenge. In this paper, we prove that any method for OT on manifolds that produces discrete approximations of transport maps necessarily suffers from the curse of dimensionality: achieving a fixed accuracy requires a number of parameters that grows exponentially with the manifold dimension. Motivated by this limitation, we introduce Riemannian Neural OT (RNOT) maps, which are continuous neural-network parameterizations of OT maps on manifolds that avoid discretization and incorporate geometric structure by construction. Under mild regularity assumptions, we prove that RNOT maps approximate Riemannian OT maps with sub-exponential complexity in the dimension. Experiments on synthetic and real datasets demonstrate improved scalability and competitive performance relative to discretization-based baselines.