We investigate the feasibility of a singing voice synthesis (SVS) system built on a decomposed framework that improves flexibility in generating singing voices. With data-driven approaches, SVS performs a music-score-to-waveform mapping; however, this direct mapping limits control, for example restricting synthesis to the languages or singers present in the labeled singing datasets. Because collecting large singing datasets labeled with music scores is expensive, we investigate an alternative approach that decomposes the SVS system and infers different singing voice features. We decompose the SVS system into three modules, namely linguistic, pitch contour, and synthesis, in which singing voice features such as linguistic content, F0, voiced/unvoiced flags, singer embeddings, and loudness are inferred directly from audio. Through this decomposed framework, we show that we can alleviate the labeled-dataset requirements, adapt to different languages or singers, and inpaint the lyrical content of singing voices. Our investigations show that the framework has the potential to reach state-of-the-art performance in SVS, even though the model provides additional functionality and improved flexibility. Our comprehensive analysis of the investigated framework's current capabilities sheds light on how the research community can achieve a flexible and multifunctional SVS system.
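The three-stage decomposition described above can be sketched as a simple pipeline. This is a minimal illustrative stub, not the paper's implementation: all module names, signatures, and the placeholder logic inside each stage are assumptions added here to show how linguistic content, F0/voicing, and synthesis could be separated into independently replaceable modules.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of a three-stage SVS decomposition.
# Stage internals are stubs; a real system would use learned models.

@dataclass
class AcousticFeatures:
    """Features inferred from audio rather than from score labels."""
    linguistic: List[int]          # discrete linguistic/content units
    f0: List[float]                # fundamental frequency contour (Hz)
    voiced: List[bool]             # per-frame voiced/unvoiced flags
    loudness: List[float]          # per-frame loudness
    singer_embedding: List[float]  # singer identity vector

def linguistic_stage(phonemes: List[str]) -> List[int]:
    """Stage 1: map lyrics/phonemes to linguistic content units (stub)."""
    return [hash(p) % 256 for p in phonemes]

def pitch_stage(note_pitches: List[float]) -> Tuple[List[float], List[bool]]:
    """Stage 2: expand note pitches into an F0 contour with V/UV flags (stub:
    two frames per note; a pitch of 0.0 marks an unvoiced segment)."""
    f0 = [p for p in note_pitches for _ in range(2)]
    voiced = [f > 0.0 for f in f0]
    return f0, voiced

def synthesis_stage(feats: AcousticFeatures) -> List[float]:
    """Stage 3: render a 'waveform' from the inferred features (stub:
    passes F0 through on voiced frames, silence elsewhere)."""
    return [f if v else 0.0 for f, v in zip(feats.f0, feats.voiced)]

def synthesize(phonemes: List[str], note_pitches: List[float],
               singer_embedding: List[float]) -> List[float]:
    """Chain the three stages; each can be swapped independently,
    which is the flexibility the decomposition is meant to provide."""
    content = linguistic_stage(phonemes)
    f0, voiced = pitch_stage(note_pitches)
    feats = AcousticFeatures(content, f0, voiced,
                             loudness=[1.0] * len(f0),
                             singer_embedding=singer_embedding)
    return synthesis_stage(feats)

wave = synthesize(["a", "b"], [220.0, 0.0], [0.1, 0.2])
```

Because singer embeddings and F0 enter only at the synthesis stage, swapping the embedding or editing the contour changes voice identity or pitch without retraining the linguistic module, which is the kind of control the direct score-to-waveform mapping lacks.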