Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting. While recognising the significance of the CSS task, prior studies have not thoroughly investigated the problem of emotional expressiveness, owing to the scarcity of emotional conversational datasets and the difficulty of stateful emotion modeling. In this paper, we propose a novel emotional CSS model, termed ECSS, with two main components: 1) to enhance emotion understanding, we introduce a heterogeneous graph-based emotional context modeling mechanism that takes the multi-source dialogue history as input to model the dialogue context and learn emotion cues from it; 2) to achieve emotion rendering, we employ a contrastive learning-based emotion renderer module to infer the accurate emotion style for the target utterance. To address data scarcity, we meticulously create emotional labels in terms of category and intensity, and annotate this additional emotional information on an existing conversational dataset (DailyTalk). Both objective and subjective evaluations suggest that our model outperforms baseline models in understanding and rendering emotions. These evaluations also underscore the importance of comprehensive emotional annotations. Code and audio samples can be found at: https://github.com/walker-hyf/ECSS.