Novel view acoustic synthesis (NVAS) aims to render binaural audio at any target viewpoint, given mono audio emitted by a sound source in a 3D scene. Existing methods propose NeRF-based implicit models that exploit visual cues as a condition for synthesizing binaural audio. However, beyond the low efficiency of heavy NeRF rendering, these methods all have a limited ability to characterize the entire scene environment, such as room geometry, material properties, and the spatial relation between the listener and the sound source. To address these issues, we propose a novel Audio-Visual Gaussian Splatting (AV-GS) model. To obtain a material-aware and geometry-aware condition for audio synthesis, we learn an explicit point-based scene representation with an audio-guidance parameter on locally initialized Gaussian points, taking into account the spatial relation between the listener and the sound source. To make the visual scene model audio-adaptive, we propose a point densification and pruning strategy that optimally distributes Gaussian points according to each point's contribution to sound propagation (e.g., more points are needed on texture-less wall surfaces, since these surfaces divert sound paths). Extensive experiments validate the superiority of AV-GS over existing alternatives on the real-world RWAS and simulation-based SoundSpaces datasets.
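The contribution-driven densification and pruning idea can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the paper's actual criterion: the function name, thresholds, and jitter scale are all hypothetical, and the per-point contribution score is taken as given rather than derived from sound propagation.

```python
import numpy as np

def densify_and_prune(points, contribution,
                      densify_thresh=0.7, prune_thresh=0.1,
                      jitter=0.01, seed=0):
    """Toy sketch: clone high-contribution points (with small positional
    jitter) and drop low-contribution ones. Thresholds are illustrative."""
    rng = np.random.default_rng(seed)
    # Prune points contributing too little to sound propagation.
    keep = contribution >= prune_thresh
    points, contribution = points[keep], contribution[keep]
    # Densify around points that strongly affect sound paths
    # (e.g., sparse points on a texture-less wall surface).
    dense = contribution >= densify_thresh
    clones = points[dense] + rng.normal(scale=jitter,
                                        size=points[dense].shape)
    return (np.vstack([points, clones]),
            np.concatenate([contribution, contribution[dense]]))

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
contrib = np.array([0.9, 0.05, 0.3])
new_pts, new_contrib = densify_and_prune(pts, contrib)
# the 0.05 point is pruned; the 0.9 point is cloned -> 3 points remain
```

In the actual model, the contribution score would be learned jointly with the audio-guidance parameters rather than supplied by hand; the sketch only shows the redistribution mechanics.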