Voice conversion is becoming increasingly popular, and a growing number of application scenarios require models with streaming inference capabilities. The recently proposed DualVC attempts to achieve this goal through a streaming model architecture, intra-model knowledge distillation, and hybrid predictive coding to compensate for the absence of future information. However, DualVC suffers from several problems that limit its performance. First, the autoregressive decoder accumulates errors by nature and also limits inference speed. Second, causal convolution enables streaming but cannot fully exploit future information within chunks. Third, the model cannot effectively handle noise in unvoiced segments, degrading sound quality. In this paper, we propose DualVC 2 to address these issues. Specifically, the model backbone is migrated to a Conformer-based architecture, enabling parallel inference. Causal convolution is replaced by non-causal convolution with a dynamic chunk mask to make better use of within-chunk future information. In addition, quiet attention is introduced to enhance the model's noise robustness. Experiments show that DualVC 2 outperforms DualVC and other baseline systems on both subjective and objective metrics, with a latency of only 186.4 ms. Our audio samples are publicly available.
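The "quiet attention" mentioned above replaces the standard softmax with a variant whose denominator includes an extra constant 1, so an attention head can assign near-zero total weight when no key is relevant (e.g. on noise-only unvoiced frames). The following is a minimal NumPy sketch of that softmax variant; the function name `softmax1` and the stabilization scheme are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def softmax1(x, axis=-1):
    """Quiet-attention softmax: exp(x_i) / (1 + sum_j exp(x_j)).

    Unlike the standard softmax, the outputs need not sum to 1:
    when all scores are very negative, every weight approaches 0,
    letting a head effectively abstain instead of amplifying noise.
    """
    # Subtract m = max(scores, 0) for numerical stability; the extra
    # "+1" term in the denominator then becomes exp(-m).
    m = np.maximum(np.max(x, axis=axis, keepdims=True), 0.0)
    e = np.exp(x - m)
    return e / (np.exp(-m) + e.sum(axis=axis, keepdims=True))

# With two equal scores of 0, each weight is 1/(1+2) = 1/3,
# so the weights sum to 2/3 rather than 1.
w = softmax1(np.array([0.0, 0.0]))

# With strongly negative scores (nothing relevant to attend to),
# the total attention mass collapses toward zero.
q = softmax1(np.array([-10.0, -10.0, -10.0]))
```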