Serialized Output Training (SOT) has achieved state-of-the-art performance in multi-talker speech recognition by sequentially decoding the speech of individual speakers. To address the challenging label-permutation issue, prior methods have relied on either Permutation Invariant Training (PIT) or a time-based First-In-First-Out (FIFO) rule. This study presents a model-based serialization strategy that incorporates an auxiliary module into the Attention Encoder-Decoder architecture, autonomously identifying the factors that determine the output order of the speech components in multi-talker speech. Experiments conducted on the LibriSpeech and LibriMix databases reveal that our approach significantly outperforms the PIT and FIFO baselines in both 2-mix and 3-mix scenarios. Further analysis shows that the serialization module identifies dominant speech components in a mixture by factors including loudness and gender, and orders speech components by their dominance scores.
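The serialization idea can be illustrated with a toy sketch. Here dominance is approximated by RMS loudness alone, which is an assumption made for illustration; the paper's auxiliary module instead learns its scoring (incorporating factors such as loudness and gender) jointly with the Attention Encoder-Decoder.

```python
import numpy as np

def dominance_scores(components):
    # Illustrative proxy: score each speaker component by its RMS loudness.
    # (The actual module in the paper learns these scores; this is not it.)
    return [float(np.sqrt(np.mean(np.square(c)))) for c in components]

def serialize_order(components):
    # Return component indices sorted by descending dominance score,
    # i.e. the decoding order a dominance-based serializer would emit.
    scores = dominance_scores(components)
    return sorted(range(len(components)), key=lambda i: -scores[i])

# Usage: a loud component and a quiet one; the louder is decoded first.
loud = 0.5 * np.ones(1600)
quiet = 0.1 * np.ones(1600)
order = serialize_order([quiet, loud])  # index 1 (loud) comes first
```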