Generative models have achieved significant success in audio generation tasks. However, existing models struggle with complex and detailed prompts, which can degrade performance. We hypothesize that this problem stems from the low quality and relatively small quantity of available training data. In this work, we aim to create a large-scale audio dataset with rich captions for improving audio generation models. We develop an automated pipeline that generates detailed captions for audio-visual datasets by transforming predicted visual captions, audio captions, and tagging labels into comprehensive descriptions using a large language model (LLM). We introduce Sound-VECaps, a dataset comprising 1.66M high-quality audio-caption pairs with enriched details, including the order of audio events, the places where they occur, and environmental information. We demonstrate that training with Sound-VECaps significantly enhances the ability of text-to-audio generation models to comprehend and generate audio from complex input prompts, improving overall system performance. Furthermore, we conduct ablation studies of Sound-VECaps across several audio-language tasks, suggesting its potential for advancing audio-text representation learning. Our dataset and models are available online.
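As a rough illustration of the caption-enrichment pipeline described above, the sketch below fuses per-clip visual captions, audio captions, and tagging labels into a single detailed description via an LLM prompt. All function names (`visual_captioner`, `audio_captioner`, `audio_tagger`, `call_llm`) and the prompt template are hypothetical placeholders for illustration only; the actual models and prompts used to build Sound-VECaps are not specified here.

```python
# A minimal sketch of the caption-enrichment pipeline, assuming hypothetical
# per-modality predictors and an LLM endpoint. None of these names reflect
# the authors' actual implementation.

from dataclasses import dataclass


@dataclass
class Clip:
    video_path: str
    audio_path: str


def visual_captioner(video_path: str) -> str:
    """Placeholder: predicted caption for the visual stream."""
    raise NotImplementedError


def audio_captioner(audio_path: str) -> str:
    """Placeholder: predicted caption for the audio stream."""
    raise NotImplementedError


def audio_tagger(audio_path: str) -> list[str]:
    """Placeholder: predicted audio event tags."""
    raise NotImplementedError


def call_llm(prompt: str) -> str:
    """Placeholder: query a large language model and return its reply."""
    raise NotImplementedError


# Hypothetical prompt asking the LLM to merge the per-modality predictions
# while preserving event order, place, and environment details.
PROMPT_TEMPLATE = (
    "Combine the following into one detailed audio caption that preserves "
    "the order of sound events and mentions the place and environment.\n"
    "Visual caption: {visual}\n"
    "Audio caption: {audio}\n"
    "Tags: {tags}\n"
)


def enrich_caption(clip: Clip) -> str:
    """Fuse per-modality predictions into one rich caption via the LLM."""
    prompt = PROMPT_TEMPLATE.format(
        visual=visual_captioner(clip.video_path),
        audio=audio_captioner(clip.audio_path),
        tags=", ".join(audio_tagger(clip.audio_path)),
    )
    return call_llm(prompt)
```

Running `enrich_caption` over every clip in an audio-visual corpus would yield the kind of enriched audio-caption pairs the dataset comprises, under the stated assumptions.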