Speech recognition and translation systems perform poorly on noisy inputs, which are frequent in realistic environments. Augmenting these systems with visual signals has the potential to improve robustness to noise. However, audio-visual (AV) data is only available in limited amounts and for fewer languages than audio-only resources. To address this gap, we present XLAVS-R, a cross-lingual audio-visual speech representation model for noise-robust speech recognition and translation in over 100 languages. It is designed to maximize the benefits of limited multilingual AV pre-training data by building on top of audio-only multilingual pre-training and simplifying existing pre-training schemes. Extensive evaluation on the MuAViC benchmark shows the strength of XLAVS-R on downstream audio-visual speech recognition and translation tasks, where it outperforms the previous state of the art by up to 18.5% WER and 4.7 BLEU given noisy AV inputs, and enables strong zero-shot audio-visual ability with audio-only fine-tuning.