Distance Metric Learning (DML) has typically dominated the audio-visual speaker verification problem space, owing to its strong performance on new and unseen classes. In our work, we explore multitask learning techniques to further enhance DML, and show that an auxiliary task, even one with weak labels, can improve the quality of the learned speaker representation without increasing model complexity at inference time. We also extend the Generalized End-to-End Loss (GE2E) to multimodal inputs and demonstrate that it achieves competitive performance in the audio-visual space. Finally, we introduce AV-Mixup, a multimodal augmentation technique applied at training time that has been shown to reduce speaker overfitting. Our network achieves state-of-the-art performance for speaker verification, reporting 0.244%, 0.252%, and 0.441% Equal Error Rate (EER) on the VoxCeleb1-O/E/H test sets respectively, which are, to our knowledge, the best published results on VoxCeleb1-E and VoxCeleb1-H.
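The abstract does not spell out the AV-Mixup formulation. As a rough illustration only, classic mixup extended to paired audio-visual training examples might look like the following sketch; the function name `av_mixup`, the `alpha` parameter, and the choice of a single mixing coefficient shared across both modalities are assumptions, not the paper's definitive method:

```python
import numpy as np

def av_mixup(audio_a, visual_a, audio_b, visual_b, alpha=0.4, rng=None):
    """Illustrative multimodal mixup sketch (assumed formulation).

    Draws one Beta(alpha, alpha) mixing coefficient and applies it
    jointly to the audio and visual features of two examples, so the
    two modalities of the mixed sample stay consistent with each other.
    """
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # shared coefficient for both modalities
    mixed_audio = lam * audio_a + (1.0 - lam) * audio_b
    mixed_visual = lam * visual_a + (1.0 - lam) * visual_b
    return mixed_audio, mixed_visual, lam
```

In a training loop, the returned coefficient `lam` would typically also be used to interpolate the corresponding labels or loss terms, as in standard mixup.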