With the development of deep learning, many different network architectures have been explored for speaker verification. However, most of them rely on a single deep learning architecture, and hybrid networks combining different architectures have been little studied in ASV tasks. In this paper, we propose the GMM-ResNext model for speaker verification. A conventional GMM does not consider the score distribution of each frame feature over all Gaussian components and ignores the relationship between neighboring speech frames. Therefore, we extract log Gaussian probability features from the raw acoustic features and use a ResNext-based network as the backbone to extract speaker embeddings. GMM-ResNext combines generative and discriminative models, which improves the generalization ability of deep learning models and makes it easier to specify meaningful priors on model parameters. We also propose a two-path GMM-ResNext model based on two gender-dependent GMMs. Experimental results show that the proposed GMM-ResNext achieves relative improvements of 48.1\% and 11.3\% in EER over ResNet34 and ECAPA-TDNN on the VoxCeleb1-O test set.
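To make the log Gaussian probability (LGP) features concrete: for each speech frame, one can evaluate the weighted log-likelihood of every Gaussian component, producing a per-frame feature vector of length equal to the number of components. The sketch below is illustrative only and assumes a diagonal-covariance GMM; the function and variable names are hypothetical, not taken from the paper's implementation.

```python
import numpy as np

def log_gaussian_prob_features(frames, means, covs, weights):
    """Compute LGP features: per-frame, per-component weighted log-likelihoods.

    frames  : (T, D) array of acoustic feature frames (e.g. MFCCs)
    means   : (M, D) component means of a diagonal-covariance GMM
    covs    : (M, D) component (diagonal) variances
    weights : (M,)   mixture weights
    Returns : (T, M) array, entry [t, m] = log(w_m * N(x_t | mu_m, diag(cov_m)))
    """
    T, D = frames.shape
    diff = frames[:, None, :] - means[None, :, :]              # (T, M, D)
    exponent = -0.5 * np.sum(diff ** 2 / covs[None], axis=-1)  # (T, M)
    # Per-component log normalization constant of the Gaussian density
    log_norm = -0.5 * (D * np.log(2 * np.pi) + np.sum(np.log(covs), axis=-1))  # (M,)
    return np.log(weights)[None, :] + log_norm[None, :] + exponent
```

The resulting (T, M) map preserves the score distribution over all components for every frame, so a downstream convolutional backbone such as ResNext can model correlations between neighboring frames that a conventional GMM discards.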