Riemannian flow matching (RFM) extends flow-based generative modeling to data supported on manifolds by learning a time-dependent tangent vector field whose flow ODE transports a simple base distribution to the data law. We develop a nonasymptotic total-variation (TV) convergence analysis for RFM samplers that combine a learned vector field with Euler discretization on the manifold. Our key technical ingredient is a differential inequality governing the evolution of the TV distance between two manifold ODE flows, which expresses the time derivative of TV through the divergence of the vector-field mismatch and the score of the reference flow; controlling these terms requires new bounds that explicitly account for parallel transport and curvature. Under smoothness assumptions on the population flow-matching field and either uniform (compact manifolds) or mean-square (Hadamard manifolds) approximation guarantees for the learned field, we obtain explicit bounds of the form $\mathrm{TV}\le C_{\mathrm{Lip}}\,h + C_{\varepsilon}\,\varepsilon$, where $h$ is the step size and $\varepsilon$ is the approximation accuracy of the learned field (with an additional higher-order $\varepsilon^2$ term on compact manifolds), cleanly separating the numerical discretization and learning errors. Instantiating the bounds yields \emph{explicit} polynomial iteration complexities on the hypersphere $S^d$ and, under mild moment conditions, on the SPD$(n)$ manifold.
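The sampler analyzed above can be illustrated concretely on the hypersphere $S^d$: Riemannian Euler integration replaces the ambient Euler step $x + h\,v$ with a geodesic step $\exp_x(h\,v)$ in the tangent direction. The sketch below is illustrative only; `v_theta` stands in for a learned vector field (an assumption, not the paper's model), and the step count corresponds to step size $h = 1/n$.

```python
import numpy as np

def project_tangent(x, u):
    # Project an ambient vector u onto the tangent space of S^d at x.
    return u - np.dot(x, u) * x

def exp_sphere(x, v):
    # Exponential map on S^d: geodesic step from x in tangent direction v.
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def euler_sample(v_theta, x0, n_steps):
    # Riemannian Euler scheme for dx/dt = v_theta(x, t) on S^d,
    # transporting a base sample x0 from t = 0 to t = 1 with step h = 1/n.
    h = 1.0 / n_steps
    x = x0 / np.linalg.norm(x0)
    for k in range(n_steps):
        t = k * h
        v = project_tangent(x, v_theta(x, t))  # keep the field tangent
        x = exp_sphere(x, h * v)               # geodesic Euler step
    return x
```

Because each update is a geodesic step, the iterates stay exactly on the sphere, so no re-projection of the state is needed; the discretization error enters only through the $C_{\mathrm{Lip}}\,h$ term of the TV bound.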