Diffusion and flow models have become the dominant paradigm for generative modeling on Riemannian manifolds, with successful applications in protein backbone generation and DNA sequence design. However, these methods require tens to hundreds of neural network evaluations at inference time, which can become a computational bottleneck in large-scale scientific sampling workflows. We introduce Riemannian MeanFlow~(RMF), a framework for learning flow maps directly on manifolds, enabling high-quality generations with as few as one forward pass. We derive three equivalent characterizations of the manifold average velocity (Eulerian, Lagrangian, and semigroup identities), and analyze parameterizations and stabilization techniques to improve training on high-dimensional manifolds. In promoter DNA design and protein backbone generation settings, RMF achieves comparable sample quality to prior methods while requiring up to 10$\times$ fewer function evaluations. Finally, we show that few-step flow maps enable efficient reward-guided design through reward look-ahead, where terminal states can be predicted from intermediate steps at minimal additional cost.