Text-to-audio (TTA) models are capable of generating diverse audio from textual prompts. However, most mainstream TTA models, which predominantly rely on Mel-spectrograms, still struggle to produce audio with rich content. The intricate detail and texture such audio requires in the Mel-spectrogram often exceed the models' capacity, leading to outputs that are blurred or lack coherence. In this paper, we begin by investigating the critical role of the U-Net in Mel-spectrogram generation. Our analysis shows that in the U-Net structure, high-frequency components in the skip-connections and the backbone influence texture and detail, while low-frequency components in the backbone are critical for the diffusion denoising process. We further propose ``Mel-Refine'', a plug-and-play approach that enhances Mel-spectrogram texture and detail by adjusting the weights of these components during inference. Our method requires no additional training or fine-tuning and is fully compatible with any diffusion-based TTA architecture. Experimental results show that our approach improves the performance metrics of the latest TTA model, Tango2, by 25\%, demonstrating its effectiveness.
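The inference-time reweighting the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual implementation: the function name, the weight values, and the scalar "features" are all illustrative, and in practice such scaling would typically be applied per frequency band to feature tensors inside the U-Net decoder.

```python
def reweight_unet_features(backbone_feats, skip_feats, b=1.1, s=0.9):
    """Sketch of inference-time feature reweighting in a U-Net decoder.

    The backbone features (carrying the low-frequency content that drives
    diffusion denoising) and the skip-connection features (carrying
    high-frequency texture and detail) are scaled by separate weights
    before the decoder fuses them. The weights `b` and `s` are illustrative
    placeholders, not values from the paper; a real implementation would
    scale multi-channel tensors, possibly per frequency band.
    """
    scaled_backbone = [b * x for x in backbone_feats]
    scaled_skip = [s * x for x in skip_feats]
    # Stand-in for the channel-wise concatenation a U-Net decoder performs.
    return scaled_backbone + scaled_skip

# Toy usage on scalar stand-ins for feature maps:
print(reweight_unet_features([1.0, 2.0], [4.0]))
```

Because the reweighting happens only at fusion points in the decoder, it requires no retraining: the same pretrained diffusion U-Net is run with modified feature magnitudes at inference time.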