We propose DistillNeRF, a self-supervised learning framework that addresses the challenge of understanding 3D environments from limited 2D observations in outdoor autonomous driving scenes. Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs with limited view overlap, and is trained in a self-supervised manner with differentiable rendering to reconstruct RGB, depth, or feature images. Our first insight is to exploit per-scene optimized Neural Radiance Fields (NeRFs) by generating dense depth and virtual camera targets from them, which helps our model learn enhanced 3D geometry from sparse, non-overlapping image inputs. Second, to learn a semantically rich 3D representation, we propose distilling features from pre-trained 2D foundation models such as CLIP or DINOv2, thereby enabling various downstream tasks without the need for costly 3D human annotations. To leverage these two insights, we introduce a novel model architecture with a two-stage lift-splat-shoot encoder and a parameterized sparse hierarchical voxel representation. Experimental results on the NuScenes and Waymo NOTR datasets demonstrate that DistillNeRF significantly outperforms comparable state-of-the-art self-supervised methods on scene reconstruction, novel view synthesis, and depth estimation; it also enables competitive zero-shot 3D semantic occupancy prediction, as well as open-world scene understanding through distilled foundation model features. Demos and code will be available at https://distillnerf.github.io/.