Predictive world models that simulate future observations under explicit camera control are fundamental to interactive AI. Despite rapid advances, current systems lack spatial persistence: they fail to maintain stable scene structures over long trajectories, frequently hallucinating details when cameras revisit previously observed locations. We identify that this geometric drift stems from reliance on screen-space positional embeddings, which conflict with the projective geometry required for 3D consistency. We introduce \textbf{ViewRope}, a geometry-aware encoding that injects camera-ray directions directly into video transformer self-attention layers. By parameterizing attention with relative ray geometry rather than pixel locality, ViewRope provides a model-native inductive bias for retrieving 3D-consistent content across temporal gaps. We further propose \textbf{Geometry-Aware Frame-Sparse Attention}, which exploits these geometric cues to selectively attend to relevant historical frames, improving efficiency without sacrificing memory consistency. We also present \textbf{ViewBench}, a diagnostic suite measuring loop-closure fidelity and geometric drift. Our results demonstrate that ViewRope substantially improves long-term consistency while reducing computational costs.
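The abstract describes conditioning self-attention on relative camera-ray geometry rather than screen-space position. The paper's exact ViewRope formulation is not stated here, so the following is only a minimal sketch under assumed simplifications: a pinhole camera model, and the ray cue injected as an additive alignment bias on the attention logits (the function names `pixel_rays` and `ray_biased_attention` and the `scale` parameter are illustrative, not from the paper).

```python
import numpy as np

def pixel_rays(h, w, fx, fy, cx, cy, R):
    """World-space unit ray directions for an h x w pixel grid.

    fx, fy, cx, cy are pinhole intrinsics; R is the 3x3 camera-to-world
    rotation. Returns an (h, w, 3) array of unit vectors.
    """
    u, v = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)
    d_cam = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u)], axis=-1)
    d_world = d_cam @ R.T  # rotate camera-frame directions into the world frame
    return d_world / np.linalg.norm(d_world, axis=-1, keepdims=True)

def ray_biased_attention(q, k, v, rays_q, rays_k, scale=4.0):
    """Dot-product attention with an additive ray-alignment bias.

    rays_q, rays_k: (N, 3) unit ray directions for query/key tokens.
    Token pairs whose rays point in similar world directions receive a
    higher bias, favoring retrieval of content from revisited viewpoints.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    logits = logits + scale * (rays_q @ rays_k.T)  # cosine of inter-ray angle
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v
```

Because the bias depends only on the relative angle between world-space rays, it is unchanged when the camera returns to a previous pose, which is the property the abstract attributes to ViewRope; a screen-space positional embedding would not be.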