We propose the problem of point-level 3D scene interpolation, which aims to simultaneously reconstruct a 3D scene in two states from multiple views, synthesize smooth point-level interpolations between them, and render the scene from novel viewpoints, all without any supervision between the states. The primary challenge lies in achieving a smooth transition between states that may involve significant and non-rigid changes. To address this challenge, we introduce "PAPR in Motion", a novel approach that builds upon the recent Proximity Attention Point Rendering (PAPR) technique, which can deform a point cloud to match a significantly different shape and render a visually coherent scene even after non-rigid deformations. Our approach is specifically designed to maintain the temporal consistency of the geometric structure by introducing various regularization techniques for PAPR. The result is a method that can effectively bridge large scene changes and produce visually coherent and temporally smooth interpolations in both geometry and appearance. Evaluation across diverse motion types demonstrates that "PAPR in Motion" outperforms the leading neural renderer for dynamic scenes. For more results and code, please visit our project website at https://niopeng.github.io/PAPR-in-Motion/ .