This paper presents a voice conversion model capable of transforming both speaking and singing voices. It addresses key challenges in current systems, such as conveying emotion, managing pronunciation and accent changes, and reproducing non-verbal sounds. A standout feature of the model is its ability to perform accent conversion on hybrid voice samples that contain both speech and singing, changing the speaker's accent while preserving the original content and prosody. The model uses an encoder-decoder architecture: the encoder, based on HuBERT, extracts the acoustic and linguistic content of the input speech, while a HiFi-GAN decoder generates audio that matches the target speaker's voice. The model also incorporates fundamental frequency (f0) features and singer embeddings, ensuring that pitch and tonal accuracy and vocal identity are preserved during transformation. This approach improves how naturally and flexibly voice style can be transformed, showing strong potential for applications in voice dubbing, content creation, and technologies such as Text-to-Speech (TTS) and Interactive Voice Response (IVR) systems.
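The encoder-decoder pipeline described above can be sketched as follows. This is a minimal, illustrative stand-in only: the class names, shapes, and arithmetic are hypothetical, and the real system relies on a pretrained HuBERT encoder and a HiFi-GAN neural vocoder, neither of which is reproduced here. The sketch only shows how the three conditioning streams (content units, f0 contour, and speaker/singer embedding) are aligned frame by frame before decoding.

```python
# Hypothetical sketch of the conditioning flow in a HuBERT + HiFi-GAN
# style voice conversion pipeline. Placeholder logic only; a real
# decoder would upsample these per-frame features into a waveform.

from dataclasses import dataclass
from typing import List


@dataclass
class ConversionInputs:
    content_units: List[int]        # discrete HuBERT-style content units
    f0: List[float]                 # frame-level fundamental frequency (Hz)
    speaker_embedding: List[float]  # target singer/speaker identity vector


def convert(inputs: ConversionInputs) -> List[float]:
    """Toy 'decoder': combines content, pitch, and identity per frame.

    A real HiFi-GAN decoder would consume these conditioning features
    and synthesize audio in the target speaker's voice.
    """
    assert len(inputs.content_units) == len(inputs.f0), "frames must align"
    identity = sum(inputs.speaker_embedding)  # collapse identity to a scalar
    # One placeholder output value per conditioned frame.
    return [unit + pitch + identity
            for unit, pitch in zip(inputs.content_units, inputs.f0)]


demo = ConversionInputs(
    content_units=[3, 7, 7, 2],
    f0=[220.0, 221.5, 223.0, 0.0],  # 0.0 marks an unvoiced frame
    speaker_embedding=[0.1, -0.2, 0.05],
)
frames = convert(demo)
```

The key design point the sketch mirrors is that linguistic content, pitch, and speaker identity are kept as separate inputs, which is what lets the model swap the identity (or accent) stream while leaving content and prosody intact.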