The Jordan-Kinderlehrer-Otto (JKO) scheme provides a stable variational framework for computing Wasserstein gradient flows, but its practical use is often limited by the high computational cost of repeatedly solving the JKO subproblems. We propose a self-supervised approach for learning a JKO solution operator without requiring numerical solutions of any JKO trajectories. The learned operator maps an input density directly to the minimizer of the corresponding JKO subproblem and can be applied iteratively to generate the gradient-flow evolution efficiently. A key challenge is that typically only a limited number of initial densities are available for training. To address this, we introduce a Learn-to-Evolve algorithm that jointly learns the JKO operator and its induced trajectories by alternating between trajectory generation and operator updates. As training progresses, the generated data increasingly approximates true JKO trajectories. Moreover, this Learn-to-Evolve strategy serves as a natural form of data augmentation, significantly enhancing the generalization ability of the learned operator. Numerical experiments demonstrate the accuracy, stability, and robustness of the proposed method across various choices of energy functionals and initial conditions.
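For reference, in the standard JKO (minimizing-movement) formulation, the subproblem referred to above reads, for an energy functional $E$ and time step $\tau > 0$,
\[
\rho_{k+1} \,=\, \operatorname*{arg\,min}_{\rho} \; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) \,+\, E(\rho),
\]
where $W_2$ denotes the 2-Wasserstein distance. The learned solution operator approximates the map $\rho_k \mapsto \rho_{k+1}$ and is composed with itself to advance the gradient flow.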
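To make the alternation concrete, the following is a minimal, illustrative sketch of a Learn-to-Evolve style loop, not the paper's implementation. It assumes densities discretized on a fixed 1-D grid and stored as probability vectors, an entropy energy (whose Wasserstein gradient flow is the heat equation), an entropic Sinkhorn surrogate for the squared Wasserstein distance, and a small MLP operator; all names (JKOOperator, learn_to_evolve, sinkhorn_w2sq) are hypothetical.

```python
# Minimal, illustrative sketch of a Learn-to-Evolve style alternation.
# Assumptions (not from the paper): densities on a fixed 1-D grid stored as
# probability vectors, an entropy energy E(rho) = sum(rho * log rho) whose
# Wasserstein gradient flow is the heat equation, and an entropic (Sinkhorn)
# surrogate for the squared 2-Wasserstein distance. Grid-spacing constants
# are omitted for brevity.

import torch
import torch.nn as nn

N, TAU = 64, 0.05                                   # grid size, JKO time step
x = torch.linspace(0.0, 1.0, N)                     # shared spatial grid


def energy(rho):
    # Entropy energy; its W2 gradient flow is the heat equation.
    return (rho * torch.log(rho.clamp_min(1e-12))).sum()


def sinkhorn_w2sq(a, b, eps=0.01, iters=100):
    # Entropic approximation of W_2^2 between probability vectors a, b on grid x.
    C = (x[:, None] - x[None, :]) ** 2              # quadratic ground cost
    log_a, log_b = a.clamp_min(1e-12).log(), b.clamp_min(1e-12).log()
    f, g = torch.zeros_like(a), torch.zeros_like(b)
    for _ in range(iters):                          # log-domain Sinkhorn updates
        f = -eps * torch.logsumexp(log_b[None, :] + (g[None, :] - C) / eps, dim=1)
        g = -eps * torch.logsumexp(log_a[:, None] + (f[:, None] - C) / eps, dim=0)
    P = torch.exp(log_a[:, None] + log_b[None, :] + (f[:, None] + g[None, :] - C) / eps)
    return (P * C).sum()


class JKOOperator(nn.Module):
    # Maps a density rho_k to a candidate minimizer of the JKO subproblem.
    def __init__(self, n=N):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, n))

    def forward(self, rho):
        return torch.softmax(self.net(rho), dim=-1)  # output is a valid density


def jko_objective(rho_next, rho):
    # Self-supervised loss: the JKO subproblem evaluated at the network output,
    # so no precomputed JKO solutions are required.
    return sinkhorn_w2sq(rho_next, rho) / (2.0 * TAU) + energy(rho_next)


def learn_to_evolve(initial_densities, rounds=5, rollout_len=10, inner_steps=200):
    op = JKOOperator()
    optim = torch.optim.Adam(op.parameters(), lr=1e-3)
    for _ in range(rounds):
        # (1) Trajectory generation: roll out the current operator from the few
        #     available initial densities to build a training pool (no gradients).
        pool = list(initial_densities)
        with torch.no_grad():
            for rho0 in initial_densities:
                rho = rho0
                for _ in range(rollout_len):
                    rho = op(rho)
                    pool.append(rho)
        # (2) Operator update: minimize the JKO objective over the generated pool;
        #     the pool doubles as a form of data augmentation.
        for _ in range(inner_steps):
            rho = pool[torch.randint(len(pool), (1,)).item()]
            loss = jko_objective(op(rho), rho)
            optim.zero_grad()
            loss.backward()
            optim.step()
    return op


# Example usage: a single Gaussian-like initial density.
rho0 = torch.exp(-((x - 0.5) ** 2) / 0.01)
operator = learn_to_evolve([rho0 / rho0.sum()])
```

The structural point of the sketch is the two alternating phases described in the abstract: rollouts with the current operator enlarge the training pool beyond the few available initial densities, while operator updates minimize the JKO objective itself, so no numerically solved JKO trajectories are ever needed.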