Lensless imaging offers a significant opportunity to develop ultra-compact cameras by removing the conventional bulky lens system. However, without a focusing element, the sensor's output is no longer a direct image but a complex multiplexed representation of the scene. Prior methods address this challenge with learnable inversion and refinement models, but they are designed primarily for 2D reconstruction and generalize poorly to 3D. We introduce GANESH, a novel framework that enables simultaneous refinement and novel view synthesis from multi-view lensless images. Unlike existing methods that require scene-specific training, our approach supports on-the-fly inference without retraining on each scene. Moreover, the framework can be tuned to specific scenes, further enhancing rendering and refinement quality. To facilitate research in this area, we also present the first multi-view lensless dataset, LenslessScenes. Extensive experiments demonstrate that our method outperforms current approaches in reconstruction accuracy and refinement quality. Code and video results are available at https://rakesh-123-cryp.github.io/Rakesh.github.io/