Audio-visual semantic segmentation (AVSS) aims to segment and classify sounding objects in videos using acoustic cues. However, most approaches operate under a closed-set assumption and can only identify categories pre-defined in the training data, lacking the generalization ability to detect novel categories in practical applications. In this paper, we introduce a new task: open-vocabulary audio-visual semantic segmentation, which extends the AVSS task to open-world scenarios beyond the annotated label space. This is a more challenging task that requires recognizing all categories, even those never seen nor heard during training. Moreover, we propose the first open-vocabulary AVSS framework, OV-AVSS, which mainly consists of two parts: 1) a universal sound source localization module that performs audio-visual fusion and locates all potential sounding objects, and 2) an open-vocabulary classification module that predicts categories with the help of prior knowledge from large-scale pre-trained vision-language models. To properly evaluate open-vocabulary AVSS, we build zero-shot training and testing subsets from the AVSBench-semantic benchmark, named AVSBench-OV. Extensive experiments demonstrate the strong segmentation and zero-shot generalization ability of our model across all categories. On the AVSBench-OV dataset, OV-AVSS achieves 55.43% mIoU on base categories and 29.14% mIoU on novel categories, exceeding the state-of-the-art zero-shot method by 41.88%/20.61% and the open-vocabulary method by 10.2%/11.6%. The code is available at https://github.com/ruohaoguo/ovavss.
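The base/novel results above are reported as mean Intersection-over-Union (mIoU). As a minimal sketch of how this metric is conventionally computed per category subset (the function name and toy labels are illustrative, not taken from the paper or its evaluation code):

```python
# Hypothetical sketch: per-class IoU averaged over a category subset,
# mirroring how mIoU is conventionally reported on base vs. novel classes.

def miou(pred, gt, classes):
    """Mean IoU of flattened label maps over the given class IDs."""
    ious = []
    for c in classes:
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0

# Toy flattened segmentation maps: 0 = background, 1 and 2 = sounding classes.
pred = [0, 1, 1, 2, 2, 0]
gt   = [0, 1, 1, 1, 2, 0]
print(round(miou(pred, gt, classes=[1, 2]), 4))  # -> 0.5833
```

Evaluating base and novel categories with separate `classes` lists, as the AVSBench-OV split does, keeps the two mIoU figures independent so zero-shot generalization is measured in isolation.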