Dysarthric speech reconstruction (DSR) aims to transform dysarthric speech into normal speech, but existing methods still suffer from low speaker similarity and poor prosody naturalness. In this paper, we propose a multi-modal DSR model that leverages neural codec language modeling to improve the reconstruction results, especially speaker similarity and prosody naturalness. Our proposed model consists of: (i) a multi-modal content encoder that extracts robust phoneme embeddings from dysarthric speech with auxiliary visual inputs; (ii) a speaker codec encoder that extracts and normalizes speaker-aware codecs from the dysarthric speech, in order to provide the original timbre and normal prosody; (iii) a speech decoder based on a codec language model that reconstructs speech from the extracted phoneme embeddings and normalized codecs. Evaluations on the widely used UASpeech corpus show that our proposed model achieves significant improvements in speaker similarity and prosody naturalness.
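The three-module pipeline above can be sketched in code. This is a minimal, hypothetical illustration of the data flow only: every class and method name is an assumption for exposition, the arithmetic is a placeholder for the real neural components, and plain Python lists stand in for feature tensors.

```python
# Hypothetical sketch of the three-module DSR pipeline; names and
# operations are illustrative placeholders, not the paper's actual API.

class MultiModalContentEncoder:
    """Fuses acoustic and auxiliary visual (lip) features into phoneme embeddings."""
    def encode(self, audio_feats, visual_feats):
        # Placeholder fusion: average each paired audio/visual frame.
        return [(a + v) / 2 for a, v in zip(audio_feats, visual_feats)]

class SpeakerCodecEncoder:
    """Extracts speaker-aware codecs and normalizes away dysarthric prosody,
    keeping the original timbre (represented here by mean removal)."""
    def encode(self, audio_feats):
        mean = sum(audio_feats) / len(audio_feats)
        return [a - mean for a in audio_feats]

class CodecLanguageModelDecoder:
    """Predicts output speech codecs conditioned on phoneme embeddings
    and normalized speaker codecs (a real model would be autoregressive)."""
    def decode(self, phoneme_embs, speaker_codecs):
        return [p + c for p, c in zip(phoneme_embs, speaker_codecs)]

def reconstruct(audio_feats, visual_feats):
    """End-to-end flow: content + speaker codecs -> reconstructed codecs."""
    content = MultiModalContentEncoder().encode(audio_feats, visual_feats)
    codecs = SpeakerCodecEncoder().encode(audio_feats)
    return CodecLanguageModelDecoder().decode(content, codecs)
```

The key design point the sketch reflects is the decoupling: content (what is said) and speaker identity/prosody (how it is said) are encoded separately and only recombined in the decoder.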