Recent advances in 4D generation have demonstrated remarkable capability in synthesizing photorealistic renderings of dynamic 3D scenes. However, despite this impressive visual quality, almost all existing methods overlook the generation of spatial audio aligned with the corresponding 4D scenes, which significantly limits truly immersive audiovisual experiences. To address this limitation, we propose Sonic4D, a novel framework that enables spatial audio generation for immersive exploration of 4D scenes. Specifically, our method comprises three stages: 1) To capture both the dynamic visual content and the raw auditory signal from a monocular video, we first employ pre-trained expert models to generate the 4D scene and its corresponding monaural audio. 2) Subsequently, to transform the monaural audio into spatial audio, we localize and track the sound sources within the 4D scene, estimating their 3D spatial coordinates at each timestamp via a pixel-level visual grounding strategy. 3) Based on the estimated sound-source locations, we then synthesize plausible spatial audio that varies with viewpoint and timestamp using physics-based simulation. Extensive experiments demonstrate that our method generates realistic spatial audio consistent with the synthesized 4D scene in a training-free manner, significantly enhancing the immersive experience for users. Generated audio and video examples are available at https://x-drunker.github.io/Sonic4D-project-page.
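The abstract does not spell out how the pixel-level visual grounding of stage 2 is implemented, but lifting a tracked 2D sound source to 3D world coordinates given per-frame depth and camera poses is standard pinhole back-projection. The following is a minimal sketch of that step, assuming metric depth, intrinsics `K`, and camera-to-world poses are available from the 4D reconstruction; the function names and array shapes are illustrative assumptions, not taken from Sonic4D.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, c2w):
    """Back-project a tracked pixel (u, v) with metric depth into world space.

    K   : 3x3 camera intrinsics.
    c2w : 4x4 camera-to-world extrinsics for this frame.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Lift to camera coordinates via the pinhole model.
    x_cam = (u - cx) * depth / fx
    y_cam = (v - cy) * depth / fy
    p_cam = np.array([x_cam, y_cam, depth, 1.0])
    # Transform into world coordinates.
    return (c2w @ p_cam)[:3]

# Hypothetical usage: one 3D source position per video frame.
# tracks: (T, 2) pixel trajectory; depths: (T,) metric depth; poses: (T, 4, 4).
def track_to_trajectory(tracks, depths, K, poses):
    return np.stack([
        pixel_to_world(u, v, d, K, T_c2w)
        for (u, v), d, T_c2w in zip(tracks, depths, poses)
    ])
```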
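Likewise, the physics-based simulation of stage 3 is not detailed in the abstract. As a hedged illustration of the simplest possible variant, the sketch below renders naive binaural audio from an estimated source trajectory using only 1/r distance attenuation and a per-ear propagation delay (the interaural time difference); a full simulation would likely also model room acoustics and head shadowing. All names here are hypothetical, and the frame-rate source trajectory is assumed to have been interpolated to one position per audio sample.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def spatialize(mono, sr, src_traj, ear_l, ear_r):
    """Render naive binaural audio from a mono signal and a moving source.

    mono     : (N,) monaural waveform.
    sr       : sample rate in Hz.
    src_traj : (N, 3) source position per audio sample.
    ear_l/r  : (3,) listener ear positions in world coordinates.
    """
    n = np.arange(len(mono))
    channels = []
    for ear in (ear_l, ear_r):
        dist = np.linalg.norm(src_traj - ear, axis=1)   # source-ear distance (m)
        delay = dist / SPEED_OF_SOUND * sr              # propagation delay (samples)
        # Fractional delay via linear interpolation of the source signal.
        ch = np.interp(n - delay, n, mono, left=0.0, right=0.0)
        # Inverse-distance attenuation, clamped to avoid blow-up near the ear.
        channels.append(ch / np.maximum(dist, 0.1))
    return np.stack(channels, axis=1)                   # (N, 2) stereo output
```

Because the two ears receive different delays and gains that change as the source or viewpoint moves, the output already conveys direction and distance over time, which is the effect the abstract describes as viewpoint- and timestamp-dependent spatial audio.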