In this paper, we introduce ConversaSynth, a framework designed to generate synthetic conversation audio using large language models (LLMs) with multiple persona settings. The framework first creates diverse and coherent text-based dialogues across various topics, which are then converted into audio using text-to-speech (TTS) systems. Our experiments demonstrate that ConversaSynth effectively generates high-quality synthetic audio datasets, which can significantly enhance the training and evaluation of models for audio tagging, audio classification, and multi-speaker speech recognition. The results indicate that the synthetic datasets generated by ConversaSynth exhibit substantial diversity and realism, making them suitable for developing robust, adaptable audio-based AI systems.
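The two-stage pipeline described above (persona-conditioned dialogue generation followed by per-speaker TTS) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Persona` fields, function names, and the stub bodies standing in for the LLM and TTS calls are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A hypothetical persona spec; ConversaSynth's actual fields may differ."""
    name: str
    style: str

def generate_dialogue(personas, topic, turns=4):
    # Stage 1 placeholder: in the framework this would prompt an LLM with the
    # persona descriptions and topic; here we just emit templated turns,
    # alternating speakers round-robin.
    lines = []
    for i in range(turns):
        speaker = personas[i % len(personas)]
        lines.append((speaker.name, f"({speaker.style}) turn {i + 1} on {topic}"))
    return lines

def synthesize(dialogue):
    # Stage 2 placeholder: a real pipeline would route each turn to a TTS
    # voice matched to the speaker; here we return tagged stand-in "audio".
    return [(speaker, f"<audio:{speaker}>") for speaker, text in dialogue]

personas = [Persona("Alice", "formal"), Persona("Bob", "casual")]
dialogue = generate_dialogue(personas, "renewable energy")
audio = synthesize(dialogue)
```

Because the text and audio stages are decoupled, the same generated dialogues can be re-rendered with different TTS voices to grow dataset diversity without re-querying the LLM.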