We propose DistillNeRF, a self-supervised learning framework that addresses the challenge of understanding 3D environments from limited 2D observations in autonomous driving. Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs, and is trained in a self-supervised manner with differentiable rendering to reconstruct RGB, depth, or feature images. Our first insight is to exploit per-scene optimized Neural Radiance Fields (NeRFs) to generate dense depth and virtual-camera targets for training, helping our model learn 3D geometry from sparse, non-overlapping image inputs. Second, to learn a semantically rich 3D representation, we propose distilling features from pre-trained 2D foundation models, such as CLIP or DINOv2, enabling various downstream tasks without costly 3D human annotations. To leverage these two insights, we introduce a novel model architecture with a two-stage lift-splat-shoot encoder and a parameterized sparse hierarchical voxel representation. Experimental results on the NuScenes dataset demonstrate that DistillNeRF significantly outperforms comparable existing self-supervised methods on scene reconstruction, novel view synthesis, and depth estimation; it also enables competitive zero-shot 3D semantic occupancy prediction, as well as open-world scene understanding through the distilled foundation-model features. Demos and code will be available at https://distillnerf.github.io/.
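The feature-distillation idea above can be illustrated with a minimal sketch: features rendered from the 3D representation are aligned with 2D foundation-model features (e.g. CLIP or DINOv2 patch tokens) at the corresponding pixels. The cosine-similarity form and function names here are illustrative assumptions, not the paper's exact loss.

```python
import math

def cosine_distill_loss(rendered_feats, teacher_feats):
    """Mean (1 - cosine similarity) between rendered per-pixel features
    and frozen 2D foundation-model features; 0 means perfect alignment.
    NOTE: illustrative sketch only -- the actual DistillNeRF objective
    may differ in normalization and weighting."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a)) or 1e-8  # guard zero vectors
        nb = math.sqrt(sum(x * x for x in b)) or 1e-8
        return dot / (na * nb)
    return sum(1.0 - cos(r, t)
               for r, t in zip(rendered_feats, teacher_feats)) / len(rendered_feats)

# Aligned directions give zero loss; orthogonal ones give loss 1.
print(cosine_distill_loss([[1.0, 0.0]], [[2.0, 0.0]]))  # 0.0
print(cosine_distill_loss([[1.0, 0.0]], [[0.0, 1.0]]))  # 1.0
```

Because the teacher features are direction-coded (as in CLIP/DINOv2 embedding spaces), a cosine objective is a natural choice: it ignores feature magnitude and supervises only semantic direction.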