This work proposes an efficient method for enhancing the quality of corrupted speech signals by leveraging both acoustic and visual cues. While existing diffusion-based approaches achieve remarkable quality, their applicability is limited by slow inference and high computational complexity. To address this, we present FlowAVSE, which improves inference speed and reduces the number of learnable parameters without degrading output quality. In particular, we employ a conditional flow matching algorithm that enables the generation of high-quality speech in a single sampling step. Moreover, we further increase efficiency by optimizing the underlying U-net architecture of diffusion-based systems. Our experiments demonstrate that FlowAVSE achieves 22 times faster inference and halves the model size while maintaining output quality. The demo page is available at: https://cyongong.github.io/FlowAVSE.github.io/
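The single-step sampling claim rests on the conditional flow matching objective: the model is trained to regress a vector field along a probability path between noise and data, and with a straight-line path the learned field can be integrated in one Euler step. The following is a minimal NumPy sketch of that idea, assuming the rectified (straight-line) path; it uses the ideal vector field in place of a trained network, and all names are illustrative rather than taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_target(x0, x1, t):
    """Straight-line probability path used in conditional flow matching:
    x_t = (1 - t) * x0 + t * x1, with target velocity u_t = x1 - x0."""
    x_t = (1.0 - t) * x0 + t * x1
    u_t = x1 - x0
    return x_t, u_t

# With the ideal vector field v(x_t, t) = u_t, a single Euler step from
# t = 0 to t = 1 maps the noise sample exactly onto the data sample --
# which is why one sampling step can suffice once the field is learned.
x0 = rng.standard_normal(4)          # noise sample
x1 = rng.standard_normal(4)          # stands in for a clean-speech target
x_t, u_t = cfm_target(x0, x1, 0.0)   # at t = 0, x_t equals x0
x1_hat = x_t + 1.0 * u_t             # one Euler step with step size 1
assert np.allclose(x1_hat, x1)
```

In practice a conditional network approximates this vector field given the noisy speech and visual features, so the one-step reconstruction is approximate rather than exact as in this idealized sketch.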