Balancing dialogue, music, and sound effects with accompanying video is crucial for immersive storytelling, yet current audio mixing workflows remain largely manual and labor-intensive. While recent advancements have introduced the visually guided acoustic highlighting task, which implicitly rebalances audio sources using multimodal guidance, it remains unclear which visual aspects are most effective as conditioning signals. We address this gap through a systematic study of whether deep video understanding improves audio remixing. Using textual descriptions as a proxy for visual analysis, we prompt large vision-language models to extract six types of visual-semantic aspects: object and character appearance, emotion, camera focus, tone, scene background, and inferred sound-related cues. Through extensive experiments, we find that camera focus, tone, and scene background consistently yield the largest improvements in perceptual mix quality over state-of-the-art baselines. Our findings (i) identify which visual-semantic cues most strongly support coherent and visually aligned audio remixing, and (ii) outline a practical path toward automating cinema-grade sound design using lightweight guidance derived from large vision-language models.