Novel view synthesis is an important problem with many applications, including AR/VR, gaming, and simulation for robotics. With the recent rapid development of Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS) methods, it is becoming difficult to keep track of the current state of the art (SoTA): methods use different evaluation protocols, codebases are difficult to install and use, and methods do not generalize well to novel 3D scenes. Our experiments support this claim by showing that even small differences in the evaluation protocols of various methods can lead to inconsistent reported metrics. To address these issues, we propose NerfBaselines, a framework that simplifies the installation of various methods, provides consistent benchmarking tools, and ensures reproducibility. We validate our implementation experimentally by reproducing the numbers reported in the original papers. To further improve accessibility, we release a web platform where commonly used methods are compared on standard benchmarks. Web: https://jkulhanek.com/nerfbaselines