Audio can disclose personally identifiable information (PII), particularly when combined with related text data. It is therefore essential to develop tools that detect privacy leakage in Contrastive Language-Audio Pretraining (CLAP). Existing membership inference attacks (MIAs) require audio as input, risking voiceprint exposure and demanding costly shadow models. To address these challenges, we propose USMID, a textual unimodal speaker-level membership inference detector for CLAP models that queries the target model using only text data and does not require training shadow models. We first randomly generate textual gibberish that is clearly absent from the training dataset, extract feature vectors from these texts with the CLAP model, and train a set of anomaly detectors on them. During inference, the feature vector of each test text is fed into the anomaly detectors to determine whether the corresponding speaker is in the training set (anomalous) or not (normal). If available, USMID can further enhance detection by incorporating real audio of the tested speaker. Extensive experiments across various CLAP architectures and datasets show that USMID outperforms baseline methods while using only text data.
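The detection pipeline above (gibberish calibration, feature extraction, anomaly detection) can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the `embed_text` stub stands in for querying the target CLAP model's text encoder, the embedding dimension is assumed, and `IsolationForest` is one plausible choice of anomaly detector.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
EMB_DIM = 512  # assumed CLAP text-embedding dimension

def embed_text(text: str) -> np.ndarray:
    # Stand-in for the target CLAP model's text encoder; a real detector
    # would query the model here instead of hashing to a random vector.
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(EMB_DIM)

def random_gibberish(n: int, length: int = 12) -> list:
    # Textual gibberish that is almost surely absent from any training set.
    chars = list("abcdefghijklmnopqrstuvwxyz")
    return ["".join(rng.choice(chars, size=length)) for _ in range(n)]

# 1) Calibrate: fit an anomaly detector on features of known non-member
#    (gibberish) texts, so "normal" means "looks like a non-member".
calib_feats = np.stack([embed_text(t) for t in random_gibberish(200)])
detector = IsolationForest(random_state=0).fit(calib_feats)

# 2) Infer: a test caption whose CLAP feature is anomalous relative to the
#    gibberish distribution is flagged as a training-set member.
def is_member(caption: str) -> bool:
    pred = detector.predict(embed_text(caption).reshape(1, -1))
    return bool(pred[0] == -1)  # -1 = anomaly = predicted member
```

In the actual method, a set of anomaly detectors is trained rather than a single one, and real audio of the tested speaker can be incorporated when available; both are omitted here for brevity.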