We introduce Audio-Agent, a multimodal framework for audio generation, editing, and composition based on text or video inputs. Conventional approaches to text-to-audio (TTA) tasks typically make a single-pass inference from a text description. While straightforward, this design struggles to produce high-quality audio for complex text conditions. In our method, a pre-trained TTA diffusion network serves as the audio-generation agent, working in tandem with GPT-4, which decomposes the text condition into atomic, specific instructions and calls the agent for audio generation. Consequently, Audio-Agent generates high-quality audio that is closely aligned with the provided text or video while also supporting variable-length generation. For video-to-audio (VTA) tasks, most existing methods require training a timestamp detector to synchronize video events with the generated audio, a process that can be tedious and time-consuming. We propose a simpler approach: fine-tuning a pre-trained Large Language Model (LLM), e.g., Gemma2-2B-it, to obtain both semantic and temporal conditions that bridge the video and audio modalities. Our framework thus provides a comprehensive solution for both TTA and VTA tasks without substantial computational overhead in training.