This work addresses the lack of multimodal generative models capable of producing high-quality videos with spatially aligned audio. While recent generative models have achieved success in video generation, they often overlook the spatial alignment between audio and visuals, which is essential for immersive experiences. To tackle this problem, we establish a new research direction in benchmarking Spatially Aligned Audio-Video Generation (SAVG). We propose three key components for the benchmark: a dataset, a baseline model, and evaluation metrics. We introduce a spatially aligned audio-visual dataset, derived from a source dataset consisting of multichannel audio, video, and spatiotemporal annotations of sound events. We propose a baseline audio-visual diffusion model focused on stereo audio-visual joint learning to accommodate spatial sound. Finally, we present metrics to evaluate video and spatial audio quality, including a new spatial audio-visual alignment metric. Our experimental results demonstrate that gaps remain between the baseline model and the ground truth in video quality, audio quality, and spatial alignment between the two modalities.
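The abstract announces a new spatial audio-visual alignment metric without detailing its computation. As a minimal, hypothetical sketch of the underlying idea, the Python snippet below compares a panning estimate derived from the stereo inter-channel level difference (ILD) against the normalized horizontal image position of the sound source; the function names, the ILD-based estimator, and the [-1, 1] position normalization are illustrative assumptions, not the paper's actual metric.

```python
# Hypothetical sketch of a stereo spatial alignment check, assuming access to
# per-frame stereo audio and per-frame horizontal positions of the sound source.
# Names and the ILD-based estimator are illustrative, not the paper's metric.
import numpy as np

def ild_pan_estimate(left: np.ndarray, right: np.ndarray, eps: float = 1e-8) -> float:
    """Estimate a left-right panning coefficient in [-1, 1] from the
    inter-channel level difference (ILD) of one stereo audio frame.
    -1 means fully left, +1 means fully right."""
    e_left = float(np.sum(left ** 2)) + eps    # left-channel energy
    e_right = float(np.sum(right ** 2)) + eps  # right-channel energy
    return (e_right - e_left) / (e_right + e_left)

def spatial_alignment_error(audio_frames, visual_x):
    """Mean absolute error between the audio-derived pan and the
    normalized horizontal image position of the sound source.
    audio_frames: iterable of (left, right) sample arrays, one per video frame.
    visual_x: per-frame source x-position, normalized to [-1, 1]."""
    pans = np.array([ild_pan_estimate(l, r) for l, r in audio_frames])
    return float(np.mean(np.abs(pans - np.asarray(visual_x))))

# Toy usage: a tone panned mostly right while annotated near the right edge
# should yield a small alignment error.
t = np.linspace(0, 1, 16000)
sig = np.sin(2 * np.pi * 440 * t)
frames = [(0.1 * sig, 0.9 * sig)]  # energy concentrated in the right channel
print(spatial_alignment_error(frames, [0.8]))
```

A real metric would likely operate on learned audio-visual embeddings or binaural cues rather than raw ILD, but this sketch captures the core comparison: does the sound's apparent direction match where the source appears on screen?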