Visual information, such as subtitles in a movie, often helps automatic speech recognition. In this paper, we propose Donut-Whisper, a dual-encoder audio-visual ASR model that leverages visual information to improve speech recognition performance in both English and Chinese. Donut-Whisper combines the advantages of linear and Q-Former-based modality alignment structures via a cross-attention module, producing more powerful audio-visual features. In addition, we propose a lightweight knowledge distillation scheme that demonstrates the potential of using audio-visual models to teach audio-only models for better performance. Moreover, we introduce a new multilingual audio-visual speech recognition dataset built from movie clips, containing both Chinese and English partitions. Donut-Whisper achieves significantly better performance than both the Donut and Whisper large V3 baselines on both the English and Chinese partitions of the dataset. In particular, it achieves an absolute 5.75% WER reduction on the English set and an absolute 16.5% CER reduction on the Chinese set compared to the Whisper ASR baseline.
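To make the dual-branch alignment concrete, the following is a minimal sketch in PyTorch of how a linear projection branch and a Q-Former-style branch of visual features might be fused via cross-attention. All module names, dimensions, and the exact attention wiring here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of dual-branch modality alignment fused by cross-attention.
# Dimensions and wiring are assumptions for illustration only.
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Fuse a linear-projection branch with a Q-Former-style branch of
    visual features via cross-attention (all names are illustrative)."""
    def __init__(self, vis_dim=768, audio_dim=1280, num_queries=32, num_heads=8):
        super().__init__()
        # Branch 1: simple linear projection of visual features into the
        # audio feature space.
        self.linear_proj = nn.Linear(vis_dim, audio_dim)
        # Branch 2: a bank of learnable queries that attend to the visual
        # features, in the spirit of a Q-Former.
        self.queries = nn.Parameter(torch.randn(num_queries, audio_dim))
        self.qformer_attn = nn.MultiheadAttention(
            audio_dim, num_heads, batch_first=True, kdim=vis_dim, vdim=vis_dim
        )
        # Cross-attention module that combines the two branches.
        self.fuse_attn = nn.MultiheadAttention(audio_dim, num_heads, batch_first=True)

    def forward(self, visual_feats):                 # (B, T_v, vis_dim)
        b = visual_feats.size(0)
        lin = self.linear_proj(visual_feats)         # (B, T_v, audio_dim)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)
        qf, _ = self.qformer_attn(q, visual_feats, visual_feats)  # (B, num_queries, audio_dim)
        # The linear branch attends to the Q-Former branch, yielding the
        # fused audio-visual feature sequence.
        fused, _ = self.fuse_attn(lin, qf, qf)       # (B, T_v, audio_dim)
        return fused

# Usage example with random visual features:
# fused = DualBranchFusion()(torch.randn(2, 50, 768))  # -> (2, 50, 1280)
```

One plausible motivation for such a design is that the linear branch preserves the full temporal resolution of the visual stream, while the query-based branch compresses it into a fixed-size summary; cross-attention lets the model draw on both.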