In this paper, we explore the overlooked challenge of stability and temporal consistency in interactive video generation, which synthesizes dynamic and controllable video worlds through interactive behaviors such as camera movements and text prompts. Despite remarkable progress in world modeling, current methods still suffer from severe instability and temporal degradation, often leading to spatial drift and scene collapse during long-horizon interactions. To better understand this issue, we first investigate the underlying causes of instability and identify that the major source of error accumulation originates within the same scene: generated frames gradually deviate from the initial clean state and propagate errors to subsequent frames. Building upon this observation, we propose a simple yet effective method, \textbf{StableWorld}, a Dynamic Frame Eviction Mechanism. By continuously filtering out degraded frames while retaining geometrically consistent ones, StableWorld effectively prevents cumulative drift at its source, yielding more stable and temporally consistent interactive generation. Promising results on multiple interactive video models, \eg, Matrix-Game, Open-Oasis, and Hunyuan-GameCraft, demonstrate that StableWorld is model-agnostic and can be applied to different interactive video generation frameworks to substantially improve stability, temporal consistency, and generalization across diverse interactive scenarios.
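The eviction idea can be illustrated with a minimal sketch. This is not the paper's actual implementation: the consistency score (here a plain float, which in practice might come from a geometric-consistency check against a reference frame) and the fixed threshold are assumptions made for illustration.

```python
# Hedged sketch of a dynamic frame-eviction buffer: frames whose
# (hypothetical) consistency score falls below a threshold are evicted
# from the conditioning context, so degraded frames cannot propagate
# errors to subsequent generations.
from dataclasses import dataclass, field

@dataclass
class FrameEvictionBuffer:
    threshold: float = 0.8                      # assumed consistency cutoff
    frames: list = field(default_factory=list)  # (frame_id, score) pairs

    def push(self, frame_id: int, score: float) -> None:
        """Add a newly generated frame, then drop degraded ones."""
        self.frames.append((frame_id, score))
        self.frames = [(fid, s) for fid, s in self.frames
                       if s >= self.threshold]

    def context(self) -> list:
        """IDs of frames retained as clean conditioning context."""
        return [fid for fid, _ in self.frames]

buf = FrameEvictionBuffer(threshold=0.8)
for fid, score in [(0, 0.99), (1, 0.95), (2, 0.62), (3, 0.91)]:
    buf.push(fid, score)
print(buf.context())  # frame 2 evicted: [0, 1, 3]
```

The key design point mirrored here is that eviction runs on every push, so the conditioning context only ever contains frames deemed consistent at generation time.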