This paper addresses the challenge of reconstructing long volumetric videos from multi-view RGB videos. Recent dynamic view synthesis methods leverage powerful 4D representations, such as feature grids or point cloud sequences, to achieve high-quality rendering results. However, they are typically limited to short (1–2 s) video clips and often suffer from large memory footprints when dealing with longer videos. To solve this issue, we propose a novel 4D representation, named Temporal Gaussian Hierarchy, to compactly model long volumetric videos. Our key observation is that dynamic scenes generally exhibit varying degrees of temporal redundancy, as they consist of areas that change at different speeds. Motivated by this, our approach builds a multi-level hierarchy of 4D Gaussian primitives, where each level separately describes scene regions with a different degree of content change and adaptively shares Gaussian primitives to represent unchanged scene content across different temporal segments, thus effectively reducing the number of Gaussian primitives. In addition, the tree-like structure of the Gaussian hierarchy allows us to efficiently represent the scene at a particular moment with a subset of the Gaussian primitives, leading to nearly constant GPU memory usage during training and rendering, regardless of the video length. Extensive experimental results demonstrate the superiority of our method over alternative methods in terms of training cost, rendering speed, and storage usage. To our knowledge, this work is the first approach capable of efficiently handling minutes of volumetric video data while maintaining state-of-the-art rendering quality. Our project page is available at: https://zju3dv.github.io/longvolcap.
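The core idea of the hierarchy, selecting one temporal segment per level so that the working set of primitives at any moment is independent of the total video length, can be illustrated with a minimal sketch. Note that the class name, the binary segmentation scheme, and all method names below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
# Hypothetical sketch of querying a temporal Gaussian hierarchy.
# Assumption: level l splits the timeline into 2**l equal segments;
# slowly changing content lives at coarse levels and is shared across
# long time spans, while fast-changing content lives at fine levels.

class TemporalGaussianHierarchy:
    def __init__(self, video_length: float, num_levels: int):
        self.video_length = video_length
        # Each cell holds the Gaussian primitives attached to that segment.
        self.levels = [
            [[] for _ in range(2 ** l)]
            for l in range(num_levels)
        ]

    def _segment(self, level: int, t: float) -> int:
        """Index of the segment covering time t at the given level."""
        n = 2 ** level
        return min(int(t / self.video_length * n), n - 1)

    def add_primitive(self, level: int, t: float, primitive) -> None:
        """Attach a primitive at a chosen level; coarser levels mean the
        primitive is shared over a longer temporal span."""
        self.levels[level][self._segment(level, t)].append(primitive)

    def active_primitives(self, t: float) -> list:
        """Primitives needed to render time t: exactly one segment per
        level, so the set size does not grow with video length."""
        active = []
        for level in range(len(self.levels)):
            active.extend(self.levels[level][self._segment(level, t)])
        return active
```

In this sketch, a static background primitive placed at level 0 is reused for every frame, while a fast-moving region placed at a fine level is only loaded for the short segment where it is valid, which is what keeps the per-moment GPU memory roughly constant.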