Speech event detection is crucial for multimedia retrieval, involving the tagging of both semantic and acoustic events. Traditional ASR systems often overlook the interplay between these two event types, focusing solely on content, even though the interpretation of dialogue can vary with environmental context. This paper tackles two primary challenges in speech event detection: the continual integration of new events without forgetting previous ones, and the disentanglement of semantic events from acoustic events. We introduce a new task, continual event detection from speech, for which we also provide two benchmark datasets. To address the challenges of catastrophic forgetting and effective disentanglement, we propose a novel method, 'Double Mixture.' This method merges speech expertise with robust memory mechanisms to enhance adaptability and prevent forgetting. Our comprehensive experiments show that this task presents significant challenges that are not effectively addressed by current state-of-the-art methods in either computer vision or natural language processing. Our approach achieves the lowest rates of forgetting and the highest levels of generalization, proving robust across various continual learning sequences. Our code and data are available at https://anonymous.4open.science/status/Continual-SpeechED-6461.