This work addresses the lack of multimodal generative models capable of producing high-quality videos with spatially aligned audio. While recent generative models have achieved strong results in video generation, they often overlook the spatial alignment between audio and visuals, which is essential for immersive experiences. To tackle this problem, we establish a new research direction: benchmarking the Spatially Aligned Audio-Video Generation (SAVG) task. We introduce a spatially aligned audio-visual dataset in which audio and video data are curated according to whether sound events occur on screen. We also propose a new metric for evaluating the spatial alignment between audio and video. Using the dataset and metric, we benchmark two types of baseline methods: one based on a joint audio-video generation model, and the other a two-stage method that combines a video generation model with a video-to-audio generation model. Our experimental results demonstrate that gaps remain between the baseline methods and the ground truth in terms of video and audio quality, as well as spatial alignment between the two modalities.