Inspired by recent developments in neural speech coding and diffusion-based language modeling, we tackle speech enhancement by modeling the conditional distribution of clean speech codes given noisy speech codes using absorbing discrete diffusion. The proposed approach, which we call ADDSE, leverages both the expressive latent space of neural audio codecs and the non-autoregressive sampling procedure of diffusion models. To efficiently model the hierarchical structure of residual vector quantization codes, we propose RQDiT, which combines techniques from RQ-Transformer and diffusion Transformers for non-autoregressive modeling. Results show competitive performance in terms of non-intrusive objective metrics on two datasets, especially at low signal-to-noise ratios and with few sampling steps. Code and audio examples are available online.
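To make the "absorbing discrete diffusion" idea concrete: in an absorbing (masking) diffusion process over discrete tokens, the forward process independently replaces each code token with a special absorbing MASK state with a probability that grows with the diffusion time, and the reverse model learns to fill the masks back in. The sketch below illustrates only this forward corruption on a toy code sequence; the `MASK` id, function name, and masking schedule are illustrative assumptions, not details taken from the ADDSE paper.

```python
import random

MASK = -1  # hypothetical id for the absorbing [MASK] state

def absorb(codes, t, rng):
    """One forward absorbing-diffusion corruption at time t in [0, 1]:
    each token is independently replaced by MASK with probability t.
    At t=0 the sequence is untouched; at t=1 it is fully absorbed."""
    return [MASK if rng.random() < t else c for c in codes]

rng = random.Random(0)
clean = [3, 7, 1, 4, 9, 2]          # toy codec tokens for one quantizer level
print(absorb(clean, 0.0, rng))      # no masking at t=0
print(absorb(clean, 0.5, rng))      # partially masked
print(absorb(clean, 1.0, rng))      # fully masked at t=1
```

In the enhancement setting described above, the reverse (denoising) model would predict the clean codec tokens at the masked positions, conditioned on the noisy-speech codes, which is what enables non-autoregressive sampling in few steps.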