Recently, deep learning-based Text-to-Speech (TTS) systems have achieved high-quality speech synthesis. Recurrent neural networks (RNNs) have become a standard modeling technique for sequential data in TTS systems and are widely used. However, training a TTS model that includes RNN components demands powerful GPUs and takes a long time. In contrast, CNN-based sequence synthesis techniques, owing to their high parallelism, can significantly reduce a TTS model's parameters and training time while preserving comparable performance, which alleviates the economic cost of training. In this paper, we propose a lightweight TTS system based on deep convolutional neural networks: a two-stage, end-to-end model that employs no recurrent units. Our model consists of two stages, Text2Spectrum and SSRN: the former encodes phonemes into a coarse mel spectrogram, and the latter synthesizes the complete spectrum from that coarse mel spectrogram. Meanwhile, to address the low-resource Mongolian setting, we improve the robustness of our model with a series of data augmentations, such as noise suppression, time warping, frequency masking, and time masking. Experiments show that, compared with mainstream TTS models, our model reduces training time and parameter count while preserving the quality and naturalness of the synthesized speech. We validate our method on the NCMMSC2022-MTTSC Challenge dataset, where it significantly reduces training time while maintaining a certain accuracy.
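The frequency-masking and time-masking augmentations mentioned above can be illustrated with a minimal SpecAugment-style sketch. This is not the authors' implementation; the function names, mask-width parameters `F` and `T`, and the 80-band mel spectrogram shape are illustrative assumptions.

```python
import numpy as np

def freq_mask(spec, F=8, rng=None):
    """Zero out a random band of up to F consecutive mel channels (illustrative)."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    n_mels = spec.shape[0]
    f = int(rng.integers(0, F + 1))            # mask width in mel channels
    f0 = int(rng.integers(0, n_mels - f + 1))  # mask start channel
    spec[f0:f0 + f, :] = 0.0
    return spec

def time_mask(spec, T=20, rng=None):
    """Zero out a random span of up to T consecutive frames (illustrative)."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    n_frames = spec.shape[1]
    t = int(rng.integers(0, T + 1))              # mask width in frames
    t0 = int(rng.integers(0, n_frames - t + 1))  # mask start frame
    spec[:, t0:t0 + t] = 0.0
    return spec

# Apply both masks to a dummy 80-band, 100-frame mel spectrogram.
mel = np.ones((80, 100))
aug = time_mask(freq_mask(mel, rng=np.random.default_rng(0)),
                rng=np.random.default_rng(1))
```

Masks are resampled per training example, so the model rarely sees the same corruption twice; this is one common way such augmentations improve robustness in low-resource settings.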