Predicting scene dynamics from visual observations is challenging. Existing methods capture dynamics only within the observed time window, failing to extrapolate far beyond the training sequence. Node-RF (Neural ODE-based NeRF) overcomes this limitation by integrating Neural Ordinary Differential Equations (NODEs) with dynamic Neural Radiance Fields (NeRFs), yielding a continuous-time spatiotemporal representation that generalizes beyond observed trajectories at constant memory cost. From visual input, Node-RF learns an implicit scene state that evolves over time via an ODE solver, propagating feature embeddings through numerical integration of the learned dynamics. A NeRF-based renderer decodes the propagated embeddings to synthesize arbitrary views for long-range extrapolation. Training on multiple motion sequences with shared dynamics enables generalization to unseen conditions. Our experiments demonstrate that Node-RF characterizes abstract system behavior without an explicit physical model and identifies critical points for future prediction.
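The core mechanism described above — a latent scene state advanced by an ODE solver and then decoded by a renderer — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `ode_rhs` stands in for the learned dynamics network, `render` stands in for the NeRF decoder, and the fixed-step RK4 loop is a hypothetical choice of solver. Because only the current state is carried forward, the rollout cost in memory is constant regardless of how far past the training window we integrate.

```python
import numpy as np

def ode_rhs(z, t, W):
    """Stand-in for the learned dynamics f_theta(z, t) (here a fixed
    linear map with tanh, chosen for illustration only)."""
    return np.tanh(W @ z)

def integrate(z0, t0, t1, W, steps=100):
    """Fixed-step RK4 rollout of the latent state from t0 to t1.
    Memory is constant in the horizon: only z is carried forward."""
    z, t = z0.copy(), t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = ode_rhs(z, t, W)
        k2 = ode_rhs(z + 0.5 * h * k1, t + 0.5 * h, W)
        k3 = ode_rhs(z + 0.5 * h * k2, t + 0.5 * h, W)
        k4 = ode_rhs(z + h * k3, t + h, W)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return z

def render(z, ray_dir):
    """Stand-in for the NeRF renderer: maps the latent scene state and a
    viewing direction to a scalar 'pixel' value."""
    return float(np.dot(z, ray_dir))
```

Extrapolation then amounts to integrating the same state equation past the observed interval and rendering the result, e.g. `render(integrate(z0, 0.0, 10.0, W), d)` for a system trained on t in [0, 1].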