In audiovisual automatic speech recognition (AV-ASR) systems, fusing visual features into a pre-trained ASR model has proven to be a promising way to improve noise robustness. In this work, based on the prominent Whisper ASR, first, we propose a simple and effective visual fusion method -- the use of visual features in both the encoder and the decoder (dual-use) -- to learn audiovisual interactions in the encoder and to weigh the modalities in the decoder. Second, we compare visual fusion methods across Whisper models of various sizes. Our proposed dual-use method shows consistent noise robustness improvements, e.g., a 35% relative improvement (WER: 4.41% vs. 6.83%) with Whisper small, and a 57% relative improvement (WER: 4.07% vs. 9.53%) with Whisper medium, compared to a typical middle-fusion reference in babble noise at a signal-to-noise ratio (SNR) of 0 dB. Third, we conduct ablation studies examining the impact of various module designs and fusion options. Fine-tuned on 1929 hours of audiovisual data, our dual-use method using Whisper medium achieves 4.08% (MUSAN babble noise) and 4.43% (NoiseX babble noise) average WER across various SNRs, thereby establishing a new state of the art in noisy conditions on the LRS3 AV-ASR benchmark. Our code is at https://github.com/ifnspaml/Dual-Use-AVASR
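The dual-use idea -- cross-modal fusion in the encoder plus modality weighting in the decoder -- could be sketched as below. This is a minimal, hypothetical PyTorch illustration: the module names, the use of cross-attention for encoder fusion, and the sigmoid gate for decoder weighting are all assumptions for exposition, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class DualUseFusion(nn.Module):
    """Illustrative sketch (not the paper's implementation) of dual-use
    visual fusion: visual features enter both the encoder and the decoder."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Encoder side: audio tokens cross-attend to visual tokens
        # so that audiovisual interactions are learned early.
        self.enc_xattn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Decoder side: a learned scalar gate weighs the two modalities.
        self.gate = nn.Linear(2 * d_model, 1)

    def encode(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio: (batch, T_audio, d), visual: (batch, T_video, d)
        fused, _ = self.enc_xattn(query=audio, key=visual, value=visual)
        return audio + fused  # residual audiovisual fusion in the encoder

    def weigh(self, audio_ctx: torch.Tensor, visual_ctx: torch.Tensor) -> torch.Tensor:
        # Both contexts: (batch, T, d); the gate decides per position
        # how much each modality contributes to the decoder input.
        g = torch.sigmoid(self.gate(torch.cat([audio_ctx, visual_ctx], dim=-1)))
        return g * audio_ctx + (1.0 - g) * visual_ctx


# Toy usage with mismatched audio/video frame rates:
model = DualUseFusion(d_model=64, n_heads=4)
audio = torch.randn(2, 50, 64)   # 2 utterances, 50 audio frames
visual = torch.randn(2, 25, 64)  # 25 video frames (lower frame rate)
enc_out = model.encode(audio, visual)          # -> (2, 50, 64)
dec_in = model.weigh(enc_out, torch.randn(2, 50, 64))  # -> (2, 50, 64)
```

Cross-attention handles the differing audio and video frame rates, since the query (audio) length determines the fused sequence length; the decoder-side gate then lets the model lean on lip features when audio is noisy.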