We propose Sortformer, a novel neural model for speaker diarization, trained with unconventional objectives compared to existing end-to-end diarization models. The permutation problem in speaker diarization has long been regarded as a critical challenge. Most prior end-to-end diarization systems employ permutation invariant loss (PIL), which optimizes for the permutation that yields the lowest error. In contrast, we introduce Sort Loss, which enables a diarization model to autonomously resolve permutation, with or without PIL. We demonstrate that combining Sort Loss and PIL achieves performance competitive with state-of-the-art end-to-end diarization models trained exclusively with PIL. Crucially, we present a streamlined multispeaker ASR architecture that leverages Sortformer as a speaker supervision model, embedding speaker label estimation within the ASR encoder state using a sinusoidal kernel function. This approach resolves the speaker permutation problem through sorted objectives, effectively bridging speaker-label timestamps and speaker tokens. In our experiments, we show that the proposed multispeaker ASR architecture, enhanced with speaker supervision, improves performance via adapter techniques. Code and trained models will be made publicly available via the NVIDIA NeMo framework.
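The core idea behind Sort Loss can be illustrated with a minimal sketch: instead of searching over all speaker permutations as PIL does, the target speaker-activity matrix is reordered by each speaker's arrival time (first active frame), and a plain binary cross-entropy is computed against that sorted target. The function names and the exact target layout below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sort_targets_by_arrival(targets):
    """Reorder speaker columns of a (frames x speakers) binary activity
    matrix by each speaker's first active frame (arrival time).
    Illustrative sketch; the layout is an assumption."""
    num_frames, num_speakers = targets.shape
    first_active = np.full(num_speakers, num_frames)  # default: never active
    for s in range(num_speakers):
        active = np.flatnonzero(targets[:, s])
        if active.size:
            first_active[s] = active[0]
    return targets[:, np.argsort(first_active, kind="stable")]

def sort_loss(preds, targets, eps=1e-7):
    """Binary cross-entropy against arrival-time-sorted targets.
    Unlike PIL, no search over speaker permutations is required."""
    y = sort_targets_by_arrival(targets).astype(float)
    p = np.clip(preds, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
```

Because the target ordering is deterministic, the model can learn to emit speakers in arrival order, which is what lets the downstream multispeaker ASR pipeline align speaker-label timestamps with speaker tokens without a permutation search.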