In this paper, we introduce SoloAudio, a novel diffusion-based generative model for target sound extraction (TSE). Our approach trains a latent diffusion model on audio, replacing the previous U-Net backbone with a skip-connected Transformer that operates on latent features. SoloAudio supports both audio-oriented and language-oriented TSE by using a CLAP model as the feature extractor for the target sound. Furthermore, SoloAudio leverages synthetic audio generated by state-of-the-art text-to-audio models for training, yielding strong generalization to out-of-domain data and unseen sound events. We evaluate the approach on the FSD Kaggle 2018 mixture dataset and on real data from AudioSet, where SoloAudio achieves state-of-the-art results on both in-domain and out-of-domain data and exhibits impressive zero-shot and few-shot capabilities. Source code and demos are released.