Scaling video generation from seconds to minutes faces a critical bottleneck: short-video data is abundant and high-fidelity, while coherent long-form data is scarce and confined to narrow domains. To address this, we propose a training paradigm where Mode Seeking meets Mean Seeking, decoupling local fidelity from long-term coherence over a unified representation via a Decoupled Diffusion Transformer. A global Flow Matching head, trained with supervised learning on long videos, captures narrative structure; in parallel, a local Distribution Matching head aligns every sliding-window segment of the student to a frozen short-video teacher through a mode-seeking reverse-KL divergence. The student thus learns long-range coherence and motion from limited long-video data while inheriting local realism from the teacher, yielding a fast, few-step generator of minute-scale videos. Evaluations show that our method effectively closes the fidelity-horizon gap, jointly improving local sharpness, motion quality, and long-range consistency. Project website: https://primecai.github.io/mmm/.
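Read at equation level, the two heads can be seen as jointly optimizing a combined objective of roughly the following form. This is a minimal sketch in our own notation, not the paper's: the abstract does not state the losses explicitly, and the linear interpolant $x_t=(1-t)x_0+tx_1$, the window index $w$, and the balancing weight $\lambda$ are our assumptions.

% Sketch only: v_theta is the student's velocity field, p_theta^{(w)} its
% sample distribution on sliding window w, p_T^{(w)} that of the frozen
% short-video teacher; all symbols are assumed, not taken from the paper.
\begin{align}
\mathcal{L}_{\mathrm{FM}} &= \mathbb{E}_{t,\,x_0,\,x_1}\!\big[\lVert v_\theta(x_t,t)-(x_1-x_0)\rVert^2\big] && \text{(global head, long videos)}\\
\mathcal{L}_{\mathrm{DM}} &= \mathbb{E}_{w}\!\big[D_{\mathrm{KL}}\big(p_\theta^{(w)}\,\Vert\,p_T^{(w)}\big)\big] && \text{(local head, sliding windows)}\\
\mathcal{L} &= \mathcal{L}_{\mathrm{FM}} + \lambda\,\mathcal{L}_{\mathrm{DM}} && \text{(assumed combination)}
\end{align}

The reverse KL $D_{\mathrm{KL}}(p_\theta\Vert p_T)$ is mode-seeking: it heavily penalizes the student for placing probability mass where the teacher places none, pushing each window toward sharp teacher modes rather than a blurry average, while the mean-seeking flow-matching regression supplies the long-horizon narrative structure.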