Since the introduction of Generative Adversarial Networks (GANs) to speech synthesis, remarkable progress has been made. Studies of vocoders have shown that GAN-based models can generate audio waveforms faster than real time while maintaining high fidelity. Typically, the vocoder input consists of band-limited spectral information, which inevitably sacrifices high-frequency detail. To address this, we adopt the full-band Mel spectrogram as input, providing the vocoder with the most complete information possible. However, previous studies have shown that full-band spectral input can cause over-smoothing, compromising the naturalness of the synthesized speech. To tackle this challenge, we propose VNet, a GAN-based neural vocoder that takes full-band spectral information as input and introduces a Multi-Tier Discriminator (MTD), composed of multiple sub-discriminators, to generate high-resolution signals. Additionally, we introduce an asymptotically constrained method that modifies the adversarial losses of the generator and discriminator, improving the stability of training. Through rigorous experiments, we demonstrate that VNet generates high-fidelity speech and significantly improves vocoder performance.
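The abstract does not specify the exact form of the asymptotically constrained adversarial loss. For context only, the following is a minimal sketch of the standard least-squares (LSGAN-style) generator and discriminator objectives commonly used in GAN vocoders such as HiFi-GAN, i.e. the kind of baseline loss such a constrained method would modify; the function names and scalar-score formulation are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: least-squares adversarial losses over scalar
# discriminator scores (real target = 1, fake target = 0). The paper's
# modified, asymptotically constrained losses are NOT reproduced here.

def discriminator_loss(real_scores, fake_scores):
    """Push scores for real audio toward 1 and for generated audio toward 0."""
    real_term = sum((s - 1.0) ** 2 for s in real_scores) / len(real_scores)
    fake_term = sum(s ** 2 for s in fake_scores) / len(fake_scores)
    return 0.5 * (real_term + fake_term)

def generator_loss(fake_scores):
    """Push scores for generated audio toward 1 (fool the discriminator)."""
    return 0.5 * sum((s - 1.0) ** 2 for s in fake_scores) / len(fake_scores)

# A discriminator that scores fakes as 1 yields zero generator loss:
print(generator_loss([1.0, 1.0]))  # 0.0
```

With a multi-tier discriminator, one such loss term would typically be computed per sub-discriminator and the terms summed.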