In this paper we reexamine the process through which a Neural Radiance Field (NeRF) can be trained to produce novel LiDAR views of a scene. Unlike image applications, where camera pixels integrate light over time, LiDAR pulses arrive at specific times. As such, multiple LiDAR returns are possible for any given detector, and the classification of these returns is inherently probabilistic. Applying a traditional NeRF training routine can result in the network learning phantom surfaces in free space between conflicting range measurements, similar to the "floater" aberrations produced by image models. We show that by formulating the loss as an integral of probability (rather than as an integral of optical density) the network can learn multiple peaks for a given ray, allowing the sampling of first, nth, or strongest returns from a single output channel. Code is available at https://github.com/mcdermatt/PLINK
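To make the multi-return idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): given a multi-modal probability density that a network might predict along a single ray, the first, nth, and strongest returns can all be read off from the same output channel by locating the density's peaks. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def returns_from_pdf(depths, pdf):
    """Extract candidate LiDAR returns as local peaks of a per-ray density.

    depths: (N,) sample depths along the ray, ascending.
    pdf:    (N,) nonnegative density over depth; multi-modal when the
            scene produces conflicting range measurements.
    """
    pdf = pdf / pdf.sum()  # normalize to a probability distribution
    # Local maxima are candidate returns (interior samples only).
    peaks = [i for i in range(1, len(pdf) - 1)
             if pdf[i] > pdf[i - 1] and pdf[i] >= pdf[i + 1]]
    ranked = sorted(peaks, key=lambda i: pdf[i], reverse=True)
    return {
        "first": depths[peaks[0]] if peaks else None,      # earliest peak
        "strongest": depths[ranked[0]] if ranked else None,  # highest peak
        "all": [depths[i] for i in peaks],                 # nth return = all[n]
    }

# Example: a two-peaked density with conflicting ranges near 5 m and 12 m,
# as might arise from a semi-transparent or dynamic surface.
depths = np.linspace(0.0, 20.0, 201)
pdf = np.exp(-0.5 * ((depths - 5.0) / 0.3) ** 2) \
    + 0.5 * np.exp(-0.5 * ((depths - 12.0) / 0.3) ** 2)
rets = returns_from_pdf(depths, pdf)
```

Here both returns survive in the learned distribution rather than being averaged into a phantom surface between them; a density-integration loss, by contrast, tends to collapse conflicting measurements toward a single spurious mode.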