Joint audio-text models are widely used for music retrieval, yet they struggle with semantic phenomena such as negation. Negation is fundamental for distinguishing the presence (or absence) of musical elements (e.g., "with vocals" vs. "without vocals"), but current systems fail to represent it reliably. In this work, we investigate and mitigate this limitation by training CLAP models from scratch on the Million Song Dataset with LP-MusicCaps-MSD captions. We introduce negation through text augmentation and a dissimilarity-based contrastive loss designed to explicitly separate original and negated captions in the joint embedding space. To evaluate progress, we propose two protocols that frame negation modeling as retrieval and binary classification tasks. Experiments demonstrate that both methods, individually and combined, improve negation handling while largely preserving retrieval performance.
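To make the idea of a dissimilarity-based contrastive loss concrete, here is a minimal sketch of one plausible formulation: a standard pull term keeping an audio embedding close to its matching caption, plus a hinge-style push term separating the original caption from its negated counterpart in the joint space. The function name, the hinge form, and the `margin` value are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dissimilarity_loss(audio: np.ndarray,
                       caption: np.ndarray,
                       negated: np.ndarray,
                       margin: float = 0.5) -> float:
    """Illustrative sketch (not the paper's exact objective):
    pull the audio embedding toward its matching caption, and push
    the negated caption away from the original beyond a margin."""
    pull = 1.0 - cosine(audio, caption)                 # matched pair stays close
    push = max(0.0, cosine(caption, negated) - (1.0 - margin))  # hinge: separate negation
    return pull + push
```

Under this toy formulation, a negated caption that collapses onto the original embedding incurs a penalty, while one that is already well separated contributes nothing beyond the pull term.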