In this paper, we describe the systems developed by the SJTU X-LANCE team for the LIMMITS 2023 Challenge, focusing mainly on the system that won the naturalness evaluation in track 1. The aim of this challenge is to build a multi-speaker, multi-lingual text-to-speech (TTS) system for Marathi, Hindi, and Telugu. Each language has one male and one female speaker in the given dataset. In track 1, only 5 hours of data per speaker may be selected to train the TTS model. Our system is based on the recently proposed VQTTS, which utilizes VQ acoustic features rather than the mel-spectrogram. We introduce additional speaker embeddings and language embeddings into VQTTS to control speaker and language information. In the cross-lingual evaluations, where speech must be synthesized in a cross-lingual speaker's voice, we provide a native speaker's embedding to the acoustic model and the target speaker's embedding to the vocoder. In the subjective MOS listening test on naturalness, our system achieves 4.77, ranking first.
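The cross-lingual strategy above can be sketched as a simple routing rule: the acoustic model (text to VQ tokens) is conditioned on a native speaker of the target language, while the vocoder (VQ tokens to waveform) is conditioned on the target cross-lingual speaker. The function and model names below are illustrative assumptions, not the authors' actual API; the toy models only demonstrate how the two embeddings are routed.

```python
# Hypothetical sketch of the cross-lingual inference strategy:
# - acoustic model sees the NATIVE speaker's embedding (natural prosody
#   and pronunciation in the target language)
# - vocoder sees the TARGET speaker's embedding (desired voice timbre)
def synthesize_cross_lingual(text, lang_emb, native_spk_emb, target_spk_emb,
                             acoustic_model, vocoder):
    vq_tokens = acoustic_model(text, lang_emb, native_spk_emb)
    return vocoder(vq_tokens, target_spk_emb)

# Toy stand-ins that just record their conditioning inputs;
# in the real system both are neural networks.
def toy_acoustic_model(text, lang_emb, spk_emb):
    return ("vq", text, lang_emb, spk_emb)

def toy_vocoder(vq_tokens, spk_emb):
    return ("wave", vq_tokens, spk_emb)

# Example: synthesize Hindi text in a Telugu speaker's voice.
wave = synthesize_cross_lingual("namaste", "hindi", "hindi_female",
                                "telugu_male", toy_acoustic_model, toy_vocoder)
```

The key design choice is the split: because VQ tokens mainly encode phonetic content and prosody while the vocoder reconstructs timbre, the native embedding keeps the synthesized speech natural in the target language and the target embedding preserves the cross-lingual speaker's identity.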