Training speech separation models in the supervised setting raises a permutation problem: finding the best assignment between the model predictions and the ground-truth separated signals. This inherently ambiguous task is customarily solved using Permutation Invariant Training (PIT). In this article, we instead consider using the Multiple Choice Learning (MCL) framework, which was originally introduced to tackle ambiguous tasks. We demonstrate experimentally on the popular WSJ0-mix and LibriMix benchmarks that MCL matches the performance of PIT, while being computationally advantageous. This opens the door to a promising research direction, as MCL can be naturally extended to handle a variable number of speakers, or to tackle speech separation in the unsupervised setting.
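To make the contrast between the two objectives concrete, here is a minimal sketch of both losses. This is an illustration, not the paper's implementation: the per-source loss is a simple MSE stand-in (separation systems typically use a negative SI-SNR), and the function names are ours. PIT searches over all bijective assignments between predictions and targets (factorial in the number of sources), whereas a winner-takes-all MCL objective matches each target to its best prediction independently, which is quadratic and is one reason it can be computationally advantageous.

```python
from itertools import permutations

import numpy as np


def per_source_loss(pred, target):
    # MSE stand-in; real systems typically use negative SI-SNR.
    return np.mean((pred - target) ** 2)


def pit_loss(preds, targets):
    """Permutation Invariant Training: minimum average loss over all
    bijective prediction-to-target assignments (O(n!) permutations)."""
    n = len(targets)
    return min(
        sum(per_source_loss(preds[p[i]], targets[i]) for i in range(n)) / n
        for p in permutations(range(n))
    )


def mcl_loss(preds, targets):
    """Winner-takes-all MCL sketch: each target is matched to its best
    prediction independently (O(n^2) pairs), dropping the bijection
    constraint that PIT enforces."""
    return float(np.mean([
        min(per_source_loss(pred, t) for pred in preds) for t in targets
    ]))
```

Because MCL relaxes PIT's bijection constraint, its loss is always a lower bound on the PIT loss for the same predictions; both are zero whenever the predictions equal the targets up to a permutation.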