Self-supervised speaker embeddings are widely used in speaker verification systems, but prior work has shown that they often encode sensitive demographic attributes, raising fairness and privacy concerns. This paper investigates the extent to which demographic information, specifically gender, age, and accent, is present in SimCLR-trained speaker embeddings, and whether such leakage can be mitigated without severely degrading speaker verification performance. We study two debiasing strategies: adversarial training via a gradient reversal layer, and a causal bottleneck architecture that explicitly separates demographic and residual information. Demographic leakage is quantified using both linear and nonlinear probing classifiers, while speaker verification performance is evaluated using ROC-AUC and the equal error rate (EER). Our results show that gender information is strongly and linearly encoded in baseline embeddings, whereas age and accent are weaker and primarily nonlinearly represented. Adversarial debiasing reduces gender leakage but has limited effect on age and accent and introduces a clear trade-off with verification accuracy. The causal bottleneck further suppresses demographic information, particularly in the residual representation, but incurs substantial performance degradation. These findings highlight fundamental limitations in mitigating demographic leakage in self-supervised speaker embeddings and clarify the trade-offs inherent in current debiasing approaches.
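The gradient reversal idea mentioned above can be sketched in miniature. The following toy is a minimal scalar illustration, not the paper's architecture: a hypothetical one-parameter "encoder" `w` produces a feature, a logistic "adversary" `v` tries to predict a binary demographic label from it, and during backpropagation the gradient flowing back into the encoder is sign-flipped and scaled by `lam`, so the adversary descends its loss while the encoder ascends it. All names and values here are illustrative assumptions.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def grl_step(w, v, x, label, lr=0.1, lam=1.0):
    """One joint update: adversary descends its loss, encoder receives
    the sign-flipped (gradient-reversed) gradient scaled by lam.

    Toy scalar model (illustrative, not the paper's setup):
      feature  z = w * x
      adversary p = sigmoid(v * z), binary cross-entropy vs. `label`
    """
    z = w * x
    p = sigmoid(v * z)
    d_logit = p - label            # dBCE/d(v*z)
    d_v = d_logit * z              # gradient w.r.t. adversary weight
    d_z = d_logit * v              # gradient reaching the feature
    v_new = v - lr * d_v           # adversary: normal gradient descent
    w_new = w - lr * (-lam * d_z) * x  # encoder: reversed gradient (ascent)
    return w_new, v_new
```

Note that the encoder update is linear in `lam`: doubling `lam` doubles how hard the encoder pushes against the adversary, while the adversary's own update is unaffected.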
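The EER metric used for verification above can be computed from two score lists with a simple threshold sweep. This is a minimal sketch assuming raw similarity scores for genuine (same-speaker) and impostor (different-speaker) trials; the function name and the toy scores are illustrative, not the paper's evaluation pipeline.

```python
def equal_error_rate(genuine, impostor):
    """EER: the operating point where the false-accept rate (FAR) on
    impostor trials equals the false-reject rate (FRR) on genuine trials.
    Sweeps every observed score as a candidate decision threshold and
    returns the mean of FAR and FRR at the closest crossing."""
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
    return best_eer
```

For perfectly separable scores the EER is 0; as the genuine and impostor score distributions overlap, the EER rises toward 0.5 (chance).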