Controlling text-to-speech (TTS) systems to synthesize speech with the prosodic characteristics expected by users has attracted much attention. To achieve controllability, current studies focus on two main directions: (1) using reference speech as a prosody prompt to guide speech synthesis, and (2) using natural language descriptions to control the generation process. However, finding reference speech that exactly contains the prosody a user wants to synthesize requires considerable effort, and description-based guidance can only determine the overall prosody, making fine-grained prosody control over the synthesized speech difficult. In this paper, we propose DrawSpeech, a sketch-conditioned diffusion model capable of generating speech based on any prosody sketches drawn by users. Specifically, the prosody sketches are fed to DrawSpeech to provide a rough indication of the expected prosody trends. DrawSpeech then recovers the detailed pitch and energy contours from the coarse sketches and synthesizes the desired speech. Experimental results show that DrawSpeech can generate speech with a wide variety of prosody and can precisely control fine-grained prosody in a user-friendly manner. Our implementation and audio samples are publicly available.