We present a modular approach to building cascade speech translation (AST) models that guarantees the resulting model performs no worse than the 1-best cascade baseline, while preserving state-of-the-art automatic speech recognition (ASR) and text machine translation (MT) performance for a given task. Our novel contribution is an ``exporter'' layer trained under an L2 loss to ensure a strong match between ASR embeddings and the MT token embeddings of the 1-best sequence. The ``exporter'' output embeddings are fed directly to the MT model in lieu of the 1-best token embeddings, guaranteeing that the resulting model performs no worse than the 1-best cascade baseline while allowing back-propagation gradients to flow from the MT model into the ASR components. The matched-embeddings cascade architecture provides a significant improvement over its 1-best counterpart in scenarios where incremental training of the MT model is not an option and yet we seek to improve quality by leveraging (speech, transcription, translated transcription) data provided with the AST task. The gain disappears when the MT model is incrementally trained on the parallel text data available with the AST task. The approach holds promise for other scenarios that seek to couple ASR encoders with immutable text models, such as large language models (LLMs).
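The core mechanism can be sketched as follows: a small trainable projection (the ``exporter'') maps ASR embeddings into the frozen MT model's token-embedding space, minimizing the L2 distance to the MT embeddings of the 1-best token sequence. The minimal NumPy sketch below is illustrative only; the dimensions, the single linear layer, and the plain gradient-descent step are assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: ASR embedding size (d_asr), MT token-embedding
# size (d_mt), and the length of the 1-best token sequence (seq_len).
d_asr, d_mt, seq_len = 8, 6, 5

# "Exporter" layer: a single linear projection with weights W and bias b.
W = rng.normal(scale=0.1, size=(d_asr, d_mt))
b = np.zeros(d_mt)

# ASR embeddings for the 1-best hypothesis, and the frozen MT model's
# token embeddings for the same 1-best sequence (the regression targets).
asr_emb = rng.normal(size=(seq_len, d_asr))
mt_emb = rng.normal(size=(seq_len, d_mt))

def l2_loss(W, b):
    """Mean squared error between exporter outputs and MT embeddings."""
    pred = asr_emb @ W + b  # exporter output embeddings, fed to the MT model
    return np.mean((pred - mt_emb) ** 2)

loss_before = l2_loss(W, b)

# One gradient-descent step on the L2 matching loss; only the exporter's
# parameters move, the MT embeddings stay frozen.
lr = 0.1
pred = asr_emb @ W + b
grad = 2.0 * (pred - mt_emb) / pred.size  # dL/dpred
W -= lr * (asr_emb.T @ grad)              # dL/dW
b -= lr * grad.sum(axis=0)                # dL/db

loss_after = l2_loss(W, b)
print(f"L2 loss: {loss_before:.4f} -> {loss_after:.4f}")
```

At inference time, `asr_emb @ W + b` replaces the 1-best token embeddings as input to the MT model; because the exporter outputs are trained to match those embeddings closely, the cascade degrades gracefully to the 1-best baseline in the worst case.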