Event cameras offer a compelling alternative to RGB cameras in many scenarios. While there are recent works on event-based novel-view synthesis, dense 3D mesh reconstruction remains scarcely explored, and existing event-based techniques are severely limited in their 3D reconstruction accuracy. To address this limitation, we present EventNeuS, a self-supervised neural model for learning 3D representations from monocular colour event streams. Our approach is the first to combine 3D signed distance function and density field learning with event-based supervision. Furthermore, we introduce spherical harmonics encodings into our model for enhanced handling of view-dependent effects. EventNeuS outperforms existing approaches by a significant margin, achieving on average 34% lower Chamfer distance and 31% lower mean absolute error than the best previous method.