We introduce MSceneSpeech (Multiple Scene Speech Dataset), an open-source, high-quality Mandarin TTS dataset intended to provide resources for expressive speech synthesis. MSceneSpeech comprises numerous audio recordings and corresponding transcripts performed and recorded according to daily-life scenarios. Each scenario includes multiple speakers and a diverse range of prosodic styles, making the dataset well suited to speech synthesis tasks that require multi-speaker style and prosody modeling. We also establish a robust baseline that, through a prompting mechanism, can effectively synthesize speech with both user-specified timbre and scene-specific prosody from arbitrary text input. The open-source MSceneSpeech dataset and audio samples from our baseline are available at https://speechai-demo.github.io/MSceneSpeech/.