We develop a fast and scalable numerical approach to solving Wasserstein gradient flows (WGFs), particularly suitable for high-dimensional cases. Our approach uses general reduced-order models, such as deep neural networks, to parameterize push-forward maps so that they push a simple reference density forward to the density solving the given WGF. The resulting dynamical system, called the parameterized WGF (PWGF), is defined on a finite-dimensional parameter space equipped with a pullback Wasserstein metric. Our numerical scheme effectively approximates solutions of WGFs with general energy functionals, without requiring spatial discretization or nonconvex optimization, thereby avoiding limitations of classical numerical methods as well as of more recent deep-learning-based approaches. We also provide a comprehensive analysis of the approximation error measured in the Wasserstein distance. Numerical experiments on various WGF examples demonstrate the promising computational efficiency and accuracy of our approach.
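To make the idea concrete, here is a minimal toy sketch of a parameterized WGF, with several simplifying assumptions not taken from the paper: the push-forward map is a 1-D affine map `T_theta(x) = a*x + b` (a stand-in for a richer reduced-order model such as a neural network), the reference density is a standard Gaussian, and the energy functional is the potential energy `F(rho) = E_rho[V(y)]` with `V(y) = y^2/2`. The pullback Wasserstein metric and the parameter gradient are estimated by Monte Carlo, and the parameters are evolved by explicit Euler steps. The function name `pwgf_step` and all numerical choices are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)  # samples from the reference density rho_ref = N(0, 1)

def pwgf_step(theta, dt):
    """One explicit Euler step of the toy parameterized WGF (illustrative only)."""
    a, b = theta
    y = a * x + b                        # samples from the push-forward T_theta # rho_ref
    # Jacobian of T_theta w.r.t. theta = (a, b), evaluated at each sample: [x, 1]
    J = np.stack([x, np.ones_like(x)], axis=1)
    # Pullback Wasserstein metric G(theta) = E[ dT/dtheta_i * dT/dtheta_j ] (Monte Carlo)
    G = J.T @ J / len(x)
    # Euclidean gradient of F(theta) = E[ V(T_theta(x)) ]; dV/dy = y, then chain rule
    grad = J.T @ y / len(x)
    # PWGF dynamics on parameter space: theta' = -G(theta)^{-1} grad F(theta)
    return theta - dt * np.linalg.solve(G, grad)

theta = np.array([2.0, 1.0])             # initial parameters (a0, b0)
for _ in range(100):                     # integrate to t = 1 with dt = 0.01
    theta = pwgf_step(theta, dt=0.01)

# For this quadratic potential the exact WGF transports particles along
# dx/dt = -grad V(x) = -x, so a(t) = a0*exp(-t) and b(t) = b0*exp(-t).
exact = np.exp(-1.0) * np.array([2.0, 1.0])
```

In this special case the pullback metric is close to the identity (since `E[x^2] = 1` and `E[x] = 0` under the reference Gaussian), so the PWGF trajectory reproduces the exponential decay of the exact flow up to Euler and Monte Carlo error; with a nonlinear map or a non-Gaussian reference, `G` becomes a genuinely nontrivial metric on parameter space.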