Capturing the temporal evolution of Gaussian properties such as position, rotation, and scale is challenging due to the vast number of time-varying parameters and the limited photometric data available; this often leads to convergence issues and makes it difficult to find an optimal solution. While feeding all inputs into an end-to-end neural network can effectively model complex temporal dynamics, this approach lacks explicit supervision and struggles to generate high-quality transformation fields. Conversely, using time-conditioned polynomial functions to model Gaussian trajectories and orientations provides a more explicit and interpretable solution, but requires significant handcrafted effort and lacks generalizability across diverse scenes. To overcome these limitations, this paper introduces a novel approach based on a learnable infinite Taylor formula to model the temporal evolution of Gaussians. The method offers both the flexibility of an implicit network-based approach and the interpretability of explicit polynomial functions, enabling more robust and generalizable modeling of Gaussian dynamics across diverse dynamic scenes. Extensive experiments on dynamic novel view rendering, conducted on public datasets, demonstrate that the proposed method achieves state-of-the-art performance in this domain. More information is available on our project page (https://ellisonking.github.io/TaylorGaussian).
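To make the Taylor-based modeling concrete, the sketch below shows how a truncated Taylor expansion with learnable coefficients could parameterize a Gaussian's position over time. This is a minimal illustration only, not the paper's actual formulation: the function name `taylor_position`, the truncation order, and the coefficient layout are all assumptions for demonstration; the paper's learnable infinite Taylor formula would apply analogous expansions to rotation and scale as well.

```python
import numpy as np

def taylor_position(t, p0, coeffs):
    """Evaluate a truncated Taylor expansion of a Gaussian's position at time t.

    Illustrative sketch (not the paper's exact model):
        p(t) = p0 + sum_{k=1..K} coeffs[k-1] * t^k / k!
    p0     : (3,) base position at t = 0
    coeffs : (K, 3) learnable k-th order derivative terms
    """
    p = np.array(p0, dtype=float)
    factorial = 1.0
    for k, c in enumerate(coeffs, start=1):
        factorial *= k  # running k!
        p = p + np.asarray(c, dtype=float) * (t ** k) / factorial
    return p

# Example: constant velocity (1,0,0) plus constant acceleration (0,2,0)
p0 = [0.0, 0.0, 0.0]
coeffs = [[1.0, 0.0, 0.0],   # first-order term (velocity)
          [0.0, 2.0, 0.0]]   # second-order term (acceleration)
print(taylor_position(1.0, p0, coeffs))  # -> [1. 1. 0.]
```

In a real pipeline, `coeffs` would be optimized per Gaussian against the photometric loss, which is what makes the expansion "learnable" rather than handcrafted.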