Predictive world models that simulate future observations under explicit camera control are fundamental to interactive AI. Despite rapid advances, current systems lack spatial persistence: they fail to maintain stable scene structures over long trajectories, frequently hallucinating details when cameras revisit previously observed locations. We identify that this geometric drift stems from reliance on screen-space positional embeddings, which conflict with the projective geometry required for 3D consistency. We introduce \textbf{ViewRope}, a geometry-aware encoding that injects camera-ray directions directly into video transformer self-attention layers. By parameterizing attention with relative ray geometry rather than pixel locality, ViewRope provides a model-native inductive bias for retrieving 3D-consistent content across temporal gaps. We further propose \textbf{Geometry-Aware Frame-Sparse Attention}, which exploits these geometric cues to selectively attend to relevant historical frames, improving efficiency without sacrificing memory consistency. We also present \textbf{ViewBench}, a diagnostic suite measuring loop-closure fidelity and geometric drift. Our results demonstrate that ViewRope substantially improves long-term consistency while reducing computational costs.
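The core idea of parameterizing attention by relative ray geometry rather than pixel locality can be sketched in a few lines. The snippet below is a simplified illustration, not the paper's exact ViewRope formulation: it assumes a pinhole camera model and uses a plain cosine-similarity bias between world-space ray directions as the geometric attention signal. The function names (`pixel_ray_directions`, `ray_alignment_bias`) are hypothetical.

```python
import numpy as np

def pixel_ray_directions(H, W, fx, fy, cx, cy, c2w_rot):
    """Unit world-space ray direction for each pixel of a pinhole camera.

    fx, fy, cx, cy: intrinsics; c2w_rot: 3x3 camera-to-world rotation.
    """
    ys, xs = np.mgrid[0:H, 0:W]
    # Back-project pixel centers to camera-space directions.
    dirs_cam = np.stack([(xs + 0.5 - cx) / fx,
                         (ys + 0.5 - cy) / fy,
                         np.ones((H, W))], axis=-1)
    # Rotate into world space and normalize to unit length.
    dirs_world = dirs_cam @ c2w_rot.T
    return dirs_world / np.linalg.norm(dirs_world, axis=-1, keepdims=True)

def ray_alignment_bias(dirs_q, dirs_k, scale=5.0):
    """Additive attention bias between query and key tokens.

    The bias is large when two tokens' camera rays point the same way in
    world space, regardless of where they land on screen, so a revisited
    viewpoint can retrieve content seen along the same rays earlier.
    """
    q = dirs_q.reshape(-1, 3)
    k = dirs_k.reshape(-1, 3)
    return scale * (q @ k.T)  # (H*W, H*W) cosine-similarity bias
```

In a video transformer this bias would be added to the pre-softmax attention logits, so tokens whose rays intersect the same scene content attract each other across large temporal gaps, whereas a screen-space positional embedding would tie attention to pixel coordinates that change whenever the camera moves.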