In recent years, text-to-audio models have revolutionized the field of automatic audio generation. This paper investigates their application in generating synthetic datasets for training data-driven models. Specifically, this study analyzes the performance of two environmental sound classification systems trained with data generated by text-to-audio models. We considered three scenarios: a) augmenting the training dataset with data generated by text-to-audio models; b) using a mixed training dataset combining real data and synthetic text-driven generated data; and c) using a training dataset composed entirely of synthetic audio. In all cases, the performance of the classification models was tested on real data. Results indicate that text-to-audio models are effective for dataset augmentation, and that performance remains stable when a subset of the recorded dataset is replaced with generated audio. However, the performance of the audio recognition models drops when they rely entirely on generated audio.
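The three training-set scenarios above can be sketched as simple dataset-assembly logic. This is an illustrative sketch only: the function name, file lists, and replacement fraction are hypothetical and not taken from the paper.

```python
# Sketch of the three training-set scenarios: (a) augment, (b) mixed,
# (c) fully synthetic. File names and ratios are illustrative.

def build_training_set(real, synthetic, scenario, replace_fraction=0.5):
    """Assemble a training set under one of the three scenarios.

    a) "augment":   all real clips plus all synthetic clips
    b) "mixed":     a fraction of the real clips replaced by synthetic ones,
                    keeping the total training-set size constant
    c) "synthetic": synthetic clips only
    """
    if scenario == "augment":
        return real + synthetic
    if scenario == "mixed":
        n_keep = int(len(real) * (1 - replace_fraction))
        n_syn = len(real) - n_keep  # replacements, so total size is unchanged
        return real[:n_keep] + synthetic[:n_syn]
    if scenario == "synthetic":
        return list(synthetic)
    raise ValueError(f"unknown scenario: {scenario}")

# Hypothetical file lists standing in for recorded and text-to-audio clips.
real_clips = [f"real_{i}.wav" for i in range(100)]
syn_clips = [f"tta_{i}.wav" for i in range(100)]

augmented = build_training_set(real_clips, syn_clips, "augment")
mixed = build_training_set(real_clips, syn_clips, "mixed", replace_fraction=0.25)
synthetic_only = build_training_set(real_clips, syn_clips, "synthetic")
```

In all three cases the resulting training set would be used to train a classifier that is then evaluated on held-out real recordings, matching the evaluation protocol described above.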