Wasserstein gradient flows of maximum mean discrepancy (MMD) functionals with non-smooth Riesz kernels exhibit a rich structure, since singular measures can become absolutely continuous ones and vice versa. In this paper we contribute to the understanding of such flows. We propose to approximate the backward scheme of Jordan, Kinderlehrer and Otto for computing such Wasserstein gradient flows, as well as a forward scheme for so-called Wasserstein steepest descent flows, by neural networks (NNs). Since we cannot restrict ourselves to absolutely continuous measures, we have to deal with transport plans and velocity plans instead of the usual transport maps and velocity fields. More precisely, we approximate the disintegration of both plans by generative NNs, which are trained with respect to appropriate loss functions. To evaluate the quality of both neural schemes, we benchmark them on the interaction energy. Here we provide analytic formulas for the Wasserstein schemes starting at a Dirac measure and show their convergence as the time step size tends to zero. Finally, we illustrate our neural MMD flows by numerical examples.
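As a minimal sketch of the functional discussed above, the squared MMD between two empirical (discrete) measures can be computed in closed form from pairwise kernel evaluations. The snippet below assumes one-dimensional samples and the negative-distance Riesz kernel K(x, y) = -|x - y|^r with 0 < r < 2; the function names are illustrative, not from the paper.

```python
import numpy as np

def riesz_kernel(x, y, r=1.0):
    # Negative-distance Riesz kernel K(x, y) = -|x - y|^r, 0 < r < 2
    # (conditionally positive definite, non-smooth at x = y)
    return -np.abs(x[:, None] - y[None, :]) ** r

def mmd_sq(x, y, r=1.0):
    # Squared MMD between the empirical measures supported on samples x and y:
    # MMD^2 = (1/n^2) sum K(x_i, x_j) - (2/nm) sum K(x_i, y_j) + (1/m^2) sum K(y_i, y_j)
    n, m = len(x), len(y)
    return (riesz_kernel(x, x, r).sum() / n**2
            - 2.0 * riesz_kernel(x, y, r).sum() / (n * m)
            + riesz_kernel(y, y, r).sum() / m**2)
```

For r = 1 this quantity coincides with the energy distance; e.g., for two Dirac measures at 0 and 1 it evaluates to 2, and it vanishes when both samples coincide.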