In recent years, novel view synthesis has gained popularity for generating high-fidelity images. While demonstrating superior performance on this task, most existing methods still rely on conventional multi-layer perceptrons for scene embedding. Furthermore, light field models suffer from geometric blurring during pixel rendering, while radiance-field-based volume rendering methods admit multiple solutions for a given target of the density distribution integral. To address these issues, we introduce Convolutional Neural Radiance Fields to model the derivatives of radiance along rays. Built on 1D convolutional operations, the proposed method effectively extracts latent ray representations through a structured neural network architecture. In addition, on top of the proposed ray modeling, a recurrent module is employed to resolve geometric ambiguity in the fully neural rendering process. Extensive experiments demonstrate the promising results of our model compared with existing state-of-the-art methods.
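The idea of convolving over samples along a ray to predict radiance derivatives, then integrating them recurrently, can be illustrated with a minimal sketch. This is not the authors' implementation: the module name, feature dimensions, and the specific choice of a GRU for the recurrent integration are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RayConvSketch(nn.Module):
    """Illustrative sketch (hypothetical, not the paper's code): a 1D
    convolution over the N samples of a ray predicts per-sample radiance
    derivatives, and a recurrent module integrates them into a pixel color."""

    def __init__(self, in_dim: int = 6, hidden: int = 32):
        super().__init__()
        # Treat the samples along a ray as a 1D sequence of per-sample
        # features (e.g. encoded position + view direction).
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, 3, kernel_size=3, padding=1),  # dC/dt per sample
        )
        # The recurrent pass accumulates the derivative sequence along the
        # ray, replacing a pointwise density integral (which admits multiple
        # solutions) with a learned, order-aware integration.
        self.gru = nn.GRU(3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)

    def forward(self, ray_feats: torch.Tensor) -> torch.Tensor:
        # ray_feats: (B, N, in_dim) -> Conv1d expects (B, C, N)
        d = self.conv(ray_feats.transpose(1, 2)).transpose(1, 2)  # (B, N, 3)
        h, _ = self.gru(d)                  # recurrent integration along ray
        return torch.sigmoid(self.head(h[:, -1]))  # final state -> (B, 3) RGB

rays = torch.randn(4, 64, 6)  # 4 rays, 64 samples each
rgb = RayConvSketch()(rays)
print(rgb.shape)  # torch.Size([4, 3])
```

The key design point sketched here is that the per-sample convolution outputs are treated as derivatives rather than absolute radiance values, so the recurrent accumulation fixes a unique integral along each ray.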