Emotional voice conversion (EVC) involves modifying the pitch, spectral envelope, and other acoustic characteristics of speech to match a desired emotional state while preserving the speaker's identity. Recent advances in EVC model pitch and duration jointly by exploiting the potential of sequence-to-sequence models. In this study, we focus on parallel speech generation to increase the reliability and efficiency of conversion. We introduce a duration-flexible EVC model (DurFlex-EVC) that integrates a style autoencoder and a unit aligner. Whereas previous variable-duration parallel generation models required text-to-speech alignment, we place self-supervised model representations and discrete speech units at the core of our parallel generation. The style autoencoder promotes content-style disentanglement by removing the source style from the input features and applying the target style. The unit aligner encodes unit-level features by modeling emotional context. Furthermore, we enhance the style of the features with a hierarchical stylization encoder and generate high-quality Mel-spectrograms with a diffusion-based generator. Subjective and objective evaluations validate the effectiveness of the approach and demonstrate its superiority over baseline models.
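To make the role of discrete speech units concrete, the sketch below shows the standard pipeline for deriving them from self-supervised features: per-frame k-means cluster IDs, run-length collapsed into a duration-free unit sequence. The cluster count, feature dimension, and the `features_to_units` helper are illustrative assumptions, not details taken from this paper.

```python
# Sketch of how discrete speech units are commonly obtained from SSL features
# (e.g., HuBERT-style units): cluster each frame with k-means, then collapse
# consecutive repeats. The dedup step is what makes the unit sequence
# duration-free; the numbers here are toy values, not the paper's settings.
import numpy as np
from itertools import groupby
from sklearn.cluster import KMeans

def features_to_units(ssl_feats: np.ndarray, kmeans: KMeans):
    """Map (time, dim) SSL features to deduplicated unit IDs plus durations."""
    frame_ids = kmeans.predict(ssl_feats)      # one cluster ID per frame
    units, durations = [], []
    for unit, run in groupby(frame_ids):       # collapse repeated frames
        run = list(run)
        units.append(int(unit))
        durations.append(len(run))             # duration is still recoverable
    return units, durations

# Usage with random stand-in features and a toy 50-cluster codebook.
rng = np.random.default_rng(0)
km = KMeans(n_clusters=50, n_init=10).fit(rng.normal(size=(2000, 768)))
units, durs = features_to_units(rng.normal(size=(120, 768)), km)
print(len(units), sum(durs))  # deduplicated length, total frames (120)
```

Collapsing repeated frames decouples the content units from their durations, which is the property a duration-flexible parallel model can then exploit.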
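The content-style disentanglement performed by the style autoencoder can likewise be sketched. The version below uses instance normalization to strip per-utterance (style) statistics and AdaIN-style conditioning to inject the target style; the module names, dimensions, and normalization choice are assumptions for illustration, not the paper's exact layers.

```python
# Minimal sketch of content-style disentanglement, assuming instance-norm
# content extraction and AdaIN-style target conditioning. Illustrative only.
import torch
import torch.nn as nn

class StyleAutoencoder(nn.Module):
    """Strips the source style from input features and re-applies a target style."""
    def __init__(self, feat_dim: int = 256, style_dim: int = 128):
        super().__init__()
        # Instance norm removes per-utterance (style) statistics, keeping content.
        self.content_norm = nn.InstanceNorm1d(feat_dim, affine=False)
        # The target style embedding predicts new scale/shift parameters.
        self.to_scale = nn.Linear(style_dim, feat_dim)
        self.to_shift = nn.Linear(style_dim, feat_dim)

    def forward(self, feats: torch.Tensor, target_style: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) SSL features; target_style: (batch, style_dim)
        content = self.content_norm(feats.transpose(1, 2)).transpose(1, 2)
        scale = self.to_scale(target_style).unsqueeze(1)  # (batch, 1, feat_dim)
        shift = self.to_shift(target_style).unsqueeze(1)
        return content * (1.0 + scale) + shift

# Usage: restyle 100 frames of 256-dim features with a target emotion embedding.
x = torch.randn(2, 100, 256)
emo = torch.randn(2, 128)
y = StyleAutoencoder()(x, emo)
print(y.shape)  # torch.Size([2, 100, 256])
```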