In recent years, Text-to-Audio Generation has achieved remarkable progress, offering sound creators powerful tools to transform textual inspirations into vivid audio. However, existing models predominantly operate directly in the acoustic latent space of a Variational Autoencoder (VAE), often leading to suboptimal alignment between generated audio and textual descriptions. In this paper, we introduce SemanticAudio, a novel framework that conducts both audio generation and editing directly in a high-level semantic space. We define this semantic space as a compact representation capturing the global identity and temporal sequence of sound events, distinct from fine-grained acoustic details. SemanticAudio employs a two-stage Flow Matching architecture: the Semantic Planner first generates these compact semantic features to sketch the global semantic layout, and the Acoustic Synthesizer subsequently produces high-fidelity acoustic latents conditioned on this semantic plan. Leveraging this decoupled design, we further introduce a training-free text-guided editing mechanism that enables precise attribute-level modifications on general audio without retraining. Specifically, this is achieved by steering the semantic generation trajectory via the difference of velocity fields derived from source and target text prompts. Extensive experiments demonstrate that SemanticAudio surpasses existing mainstream approaches in semantic alignment. Demo available at: https://semanticaudio1.github.io/
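The training-free editing mechanism described above — steering the semantic generation trajectory by the difference of velocity fields from source and target prompts — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `velocity` is a hypothetical stand-in for a learned flow-matching velocity field, the conditioning vectors stand in for text embeddings, and `guidance` is an assumed interpolation weight.

```python
import numpy as np

def velocity(x, t, cond):
    # Toy stand-in for a learned flow-matching velocity field v(x, t; text).
    # `cond` is a fixed vector playing the role of a text-prompt embedding.
    return cond - x  # simple linear field pulling x toward the conditioning vector

def edit_trajectory(x0, src_cond, tgt_cond, guidance=1.0, steps=50):
    """Euler-integrate the flow while steering the trajectory by the
    difference of source- and target-conditioned velocities."""
    x, dt = x0.copy(), 1.0 / steps
    for i in range(steps):
        t = i * dt
        v_src = velocity(x, t, src_cond)
        v_tgt = velocity(x, t, tgt_cond)
        # Steered velocity: follow the source field, nudged along (v_tgt - v_src).
        x = x + dt * (v_src + guidance * (v_tgt - v_src))
    return x
```

With `guidance=0` the trajectory follows the source prompt unchanged; increasing `guidance` moves the semantic plan toward the target prompt, which is what enables attribute-level edits without any retraining.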