We introduce HyperGS, a novel framework for Hyperspectral Novel View Synthesis (HNVS) based on a new latent 3D Gaussian Splatting (3DGS) technique. Our approach enables simultaneous spatial and spectral renderings by encoding material properties from multi-view 3D hyperspectral datasets. HyperGS reconstructs high-fidelity views from arbitrary perspectives with improved accuracy and speed, outperforming existing methods. To address the challenges of high-dimensional data, we perform view synthesis in a learned latent space, incorporating a pixel-wise adaptive density function and a pruning technique for increased training stability and efficiency. Additionally, we introduce the first HNVS benchmark, implementing a number of new baselines based on recent SOTA RGB-NVS techniques, alongside the small number of prior works on HNVS. We demonstrate HyperGS's robustness through extensive evaluation on real and simulated hyperspectral scenes, achieving a 14 dB accuracy improvement over previously published models.