Gaussian splatting has emerged as a powerful tool for high-fidelity reconstruction of dynamic scenes. However, existing methods primarily rely on implicit motion representations, such as encoding motions into neural networks or per-Gaussian parameters, which makes it difficult to further manipulate the reconstructed motions. This lack of explicit controllability limits existing methods to replaying recorded motions only, hindering wider applications. To address this, we propose Motion Blender Gaussian Splatting (MB-GS), a novel framework that uses motion graphs as an explicit and sparse motion representation. The motion of graph links is propagated to individual Gaussians via dual quaternion skinning, with learnable weight painting functions determining the influence of each link. The motion graphs and 3D Gaussians are jointly optimized from input videos via differentiable rendering. Experiments show that MB-GS achieves state-of-the-art performance on the iPhone dataset while being competitive on HyperNeRF. Additionally, we demonstrate the application potential of our method in generating novel object motions and robot demonstrations through motion editing. Video demonstrations can be found at https://mlzxy.github.io/mbgs.
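To make the skinning step concrete, the following is a minimal sketch of how dual quaternion skinning blends per-link rigid transforms into a single transform for one Gaussian. This is an illustration under standard DQS conventions, not the paper's implementation; all function names and the numpy setup are our own.

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions stored as [w, x, y, z]
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def rotate(q, p):
    # rotate point p by unit quaternion q: q * p * q^-1
    pq = np.concatenate([[0.0], p])
    return quat_mul(quat_mul(q, pq), quat_conj(q))[1:]

def dq_from_rt(q, t):
    # dual quaternion (real, dual) encoding rotation q then translation t
    qd = 0.5 * quat_mul(np.concatenate([[0.0], t]), q)
    return q, qd

def dq_skin(point, link_dqs, weights):
    # Blend the links' dual quaternions with this Gaussian's skinning
    # weights (the learnable "weight painting" output), renormalize,
    # and apply the resulting rigid transform to the point.
    qr = sum(w * dq[0] for w, dq in zip(weights, link_dqs))
    qd = sum(w * dq[1] for w, dq in zip(weights, link_dqs))
    n = np.linalg.norm(qr)
    qr, qd = qr / n, qd / n
    t = 2.0 * quat_mul(qd, quat_conj(qr))[1:]  # recover translation
    return rotate(qr, point) + t
```

For example, blending the identity transform with a unit translation along x at equal weights moves a point half a unit along x, illustrating how DQS interpolates rigid motions without the candy-wrapper artifacts of linear blend skinning.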