Decoder-only discrete-token language models have recently achieved significant success in automatic speech recognition. However, systematic analyses of how different modalities affect performance in specific scenarios remain limited. In this paper, we investigate the effects of multiple modalities on recognition accuracy on both synthetic and real-world datasets. Our experiments suggest that: (1) integrating more modalities can increase accuracy; in particular, to the best of our knowledge, this paper is the first to show the benefit of combining audio, image context, and lip information; (2) images as a supplementary modality for speech recognition provide the greatest benefit at moderate noise levels; moreover, they exhibit a trend different from that of inherently synchronized modalities such as lip movements; and (3) performance improves on both synthetic and real-world datasets when the most relevant visual information is filtered in a preprocessing step.