Zero-shot voice conversion (VC) aims to transform the source speaker's timbre into that of an arbitrary unseen speaker without altering the original speech content. While recent zero-shot VC methods have made remarkable progress, considerable room remains for improving speaker similarity and speech naturalness. In this paper, we propose Takin-VC, a novel zero-shot VC framework that tackles this challenge through joint hybrid content modeling and memory-augmented, context-aware timbre modeling. Specifically, we first present an effective hybrid content encoder, guided by neural codec training, that leverages quantized features from pre-trained WavLM and HybridFormer models to extract the linguistic content of the source speech. We then introduce an advanced cross-attention-based, context-aware timbre modeling approach that learns fine-grained, semantically associated target timbre features. To further enhance both speaker similarity and real-time performance, we employ a conditional flow matching model to reconstruct the Mel-spectrogram of the source speech. Additionally, we propose an efficient memory-augmented module that generates high-quality conditional target inputs for the flow matching process, thereby improving the overall performance of the system. Experimental results demonstrate that Takin-VC surpasses state-of-the-art zero-shot VC systems, delivering superior performance in terms of both speech naturalness and speaker similarity.
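To make the conditional flow matching step concrete, the following is a minimal, generic sketch of the optimal-transport CFM training target and Euler-ODE sampling loop, not the paper's actual implementation: the function names, the `sigma_min` parameter, and the toy vector field are illustrative assumptions, and in Takin-VC the vector field would be a neural network conditioned on content and timbre features predicting Mel-spectrogram frames.

```python
import numpy as np

def ot_cfm_pair(x0, x1, t, sigma_min=1e-4):
    """Optimal-transport CFM path: given noise x0 and data x1 (e.g. a
    Mel-spectrogram frame), return the interpolant x_t and the target
    velocity u_t the model is regressed against."""
    xt = (1.0 - (1.0 - sigma_min) * t) * x0 + t * x1
    ut = x1 - (1.0 - sigma_min) * x0
    return xt, ut

def euler_sample(v_field, x0, cond, n_steps=10):
    """Generate by integrating dx/dt = v(x, t, cond) from t=0 to t=1
    with fixed-step Euler; few steps suffice, aiding real-time use."""
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * v_field(x, t, cond)
    return x
```

At training time the model minimizes the squared error between its predicted velocity and `ut`; at inference, `euler_sample` replaces the many denoising steps of a diffusion model with a short ODE integration, which is the usual motivation for choosing flow matching when latency matters.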