In recent years, numerous tasks have been proposed to encourage models to develop specific capabilities in audio-visual scene understanding, primarily categorized into temporal localization, spatial localization, spatio-temporal reasoning, and pixel-level understanding. In contrast, humans possess a unified understanding ability across such diverse tasks. Designing an audio-visual model with the general capability to unify these tasks is therefore of great value. However, naive joint training on all tasks can lead to interference due to the heterogeneity of audio-visual data and the complex relationships among tasks. We argue that this problem can be solved through explicit cooperation among tasks. To this end, we propose a unified learning method that achieves explicit inter-task cooperation from both the data and the model perspectives. Specifically, since the labels of existing datasets are simple words, we carefully refine these datasets and construct an Audio-Visual Unified Instruction-tuning dataset with Explicit reasoning process (AV-UIE), which clarifies the cooperative relationships among tasks. Subsequently, to facilitate concrete cooperation during the learning stage, we design an interaction-aware LoRA structure with multiple LoRA heads, each learning a different aspect of audio-visual data interaction. By unifying explicit cooperation across the data and model aspects, our method not only surpasses existing unified audio-visual models on multiple tasks, but also outperforms most specialized models on certain tasks. Furthermore, we visualize the process of explicit cooperation and, surprisingly, find that each LoRA head exhibits a certain audio-visual understanding ability. Code and dataset: https://github.com/GeWu-Lab/Crab
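To make the multi-head idea concrete, below is a minimal sketch of a linear layer augmented with several LoRA heads. This is an illustrative assumption, not the paper's exact design: the class name `MultiHeadLoRA`, the per-head gates, and the summation of head outputs are all hypothetical choices; the paper's interaction-aware structure may route or combine heads differently.

```python
import numpy as np

class MultiHeadLoRA:
    """Hypothetical sketch: a frozen linear map W augmented with several
    low-rank LoRA heads, each intended to capture a different aspect of
    audio-visual interaction. Head outputs are summed via per-head gates."""

    def __init__(self, in_dim, out_dim, num_heads=3, rank=4, alpha=8.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((out_dim, in_dim)) * 0.02  # frozen base weight
        self.scaling = alpha / rank  # standard LoRA scaling factor
        # Each head i holds a low-rank pair (A_i, B_i); B_i starts at zero,
        # so the adapted layer initially reproduces the frozen base output.
        self.A = [rng.standard_normal((rank, in_dim)) * 0.01 for _ in range(num_heads)]
        self.B = [np.zeros((out_dim, rank)) for _ in range(num_heads)]
        self.gates = np.ones(num_heads)  # learnable per-head gates (assumed)

    def forward(self, x):
        """x: (batch, in_dim) -> (batch, out_dim)."""
        out = x @ self.W.T
        for g, A, B in zip(self.gates, self.A, self.B):
            # Low-rank update: x -> x A^T B^T, gated and scaled per head.
            out = out + g * self.scaling * (x @ A.T @ B.T)
        return out
```

Because every `B_i` is zero-initialized, training starts from the frozen base model's behaviour, and each head can then specialize on a distinct facet of the cross-modal interaction.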