Visually-guided acoustic highlighting seeks to rebalance audio in alignment with the accompanying video, creating a coherent audio-visual experience. While visual saliency and enhancement have been widely studied, acoustic highlighting remains underexplored, often leaving visual and auditory focus misaligned. Existing approaches rely on discriminative models, which struggle with the inherent ambiguity of audio remixing: no natural one-to-one mapping exists between poorly balanced and well-balanced audio mixes. To address this limitation, we reframe the task as a generative problem and introduce a Conditional Flow Matching (CFM) framework. A key challenge in iterative flow-based generation is that early prediction errors, such as selecting the wrong source to enhance, compound over integration steps and push trajectories off the data manifold. To mitigate this, we introduce a rollout loss that penalizes drift at the final integration step, encouraging self-correcting trajectories and stabilizing long-range flow integration. We further propose a conditioning module that fuses audio and visual cues before vector field regression, enabling explicit cross-modal source selection. Extensive quantitative and qualitative evaluations show that our method consistently surpasses the previous state-of-the-art discriminative approach, establishing that visually-guided audio remixing is best addressed through generative modeling.
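For context, a minimal sketch of the two objectives named above, written in notation we introduce here (x_0 a noise or unbalanced-mix sample, x_1 the well-balanced target, c the fused audio-visual conditioning). The abstract does not specify the probability path or conditioning scheme, so this shows the standard optimal-transport CFM form rather than the authors' exact formulation:

```latex
% Standard CFM regresses the velocity of a linear interpolation path
% (notation is an assumption; the paper's formulation may differ):
\[
  \mathcal{L}_{\mathrm{CFM}}(\theta)
  = \mathbb{E}_{t,\,x_0,\,x_1}
    \bigl\| v_\theta(x_t, t \mid c) - (x_1 - x_0) \bigr\|^2,
  \qquad x_t = (1-t)\,x_0 + t\,x_1 .
\]
% A rollout loss instead Euler-integrates the learned field for K steps
% and penalizes drift only at the end point, so intermediate errors must
% be corrected along the trajectory rather than merely avoided locally:
\[
  \mathcal{L}_{\mathrm{rollout}}(\theta)
  = \bigl\| \hat{x}_{t_K} - x_1 \bigr\|^2,
  \qquad
  \hat{x}_{t_{k+1}} = \hat{x}_{t_k} + \Delta t\, v_\theta(\hat{x}_{t_k}, t_k \mid c),
  \quad \hat{x}_{t_0} = x_0 .
\]
```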
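And a compact, runnable sketch of how such a rollout penalty could be trained alongside the standard CFM term. The toy MLP, dimensions, step count, and loss weight are all illustrative assumptions, not the paper's architecture; the essential point is that gradients flow through the full Euler rollout, so the field is pushed to self-correct:

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Toy stand-in for a conditioned vector field v_theta(x, t | c).
    The paper fuses audio and visual cues before regression; here we
    simply concatenate [x, t, c] and apply an MLP (an assumption)."""
    def __init__(self, dim: int, cond_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1 + cond_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t, c):
        # t: (batch, 1) scalar time; c: (batch, cond_dim) fused AV condition
        return self.net(torch.cat([x, t, c], dim=-1))

def cfm_loss(model, x0, x1, c):
    """Standard CFM regression along the linear path x_t = (1-t) x0 + t x1."""
    t = torch.rand(x0.size(0), 1)
    xt = (1 - t) * x0 + t * x1
    target = x1 - x0                      # velocity of the linear path
    return ((model(xt, t, c) - target) ** 2).mean()

def rollout_loss(model, x0, x1, c, steps: int = 8):
    """Euler-integrate the learned field from t=0 to t=1 and penalize only
    the final-step drift; no detach between steps, so early errors are
    visible to the gradient and must be corrected along the trajectory."""
    x = x0
    dt = 1.0 / steps
    for k in range(steps):
        t = torch.full((x.size(0), 1), k * dt)
        x = x + dt * model(x, t, c)
    return ((x - x1) ** 2).mean()

# Usage sketch: combine both terms; the 0.1 weight is a hypothetical value.
if __name__ == "__main__":
    dim, cond_dim, batch = 32, 16, 4
    model = VelocityField(dim, cond_dim)
    x0 = torch.randn(batch, dim)          # noise (or poorly balanced mix)
    x1 = torch.randn(batch, dim)          # well-balanced target mix
    c = torch.randn(batch, cond_dim)      # fused audio-visual conditioning
    loss = cfm_loss(model, x0, x1, c) + 0.1 * rollout_loss(model, x0, x1, c)
    loss.backward()
    print(float(loss))
```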