Generative multimodal content is increasingly prevalent in the content creation arena, as it allows artists and media personnel to create pre-production mockups by quickly bringing their ideas to life. Generating audio from text prompts is an important part of such processes in the music and film industries. Most recent diffusion-based text-to-audio models focus on training increasingly sophisticated diffusion models on large datasets of prompt-audio pairs. These models do not explicitly attend to the presence of concepts or events, or to their temporal ordering, in the output audio with respect to the input prompt. Our hypothesis is that focusing on these aspects of audio generation could improve performance in the presence of limited data. Accordingly, using the existing text-to-audio model Tango, we synthetically create a preference dataset in which each prompt has one winner audio output and several loser audio outputs for the diffusion model to learn from. The loser outputs, in theory, have some concepts from the prompt missing or in an incorrect order. We fine-tune the publicly available Tango model on our preference dataset using the diffusion-DPO (direct preference optimization) loss and show that it produces improved audio output over Tango and AudioLDM2 on both automatic and manual evaluation metrics.