With the rise of deep learning in various applications, privacy concerns around the protection of training data have become a critical area of research. Whereas prior studies have focused on privacy risks in single-modal models, we introduce a novel method to assess privacy risks in multi-modal models, specifically vision-language models such as CLIP. The proposed Identity Inference Attack (IDIA) reveals whether an individual was included in the training data by querying the model with images of that person. By letting the model choose from a wide variety of possible text labels, the attack reveals whether the model recognizes the person and, therefore, whether the person's data was used for training. Our large-scale experiments on CLIP demonstrate that individuals used for training can be identified with very high accuracy. We confirm that the model has learned to associate names with the depicted individuals, implying that sensitive information can be extracted by adversaries. Our results highlight the need for stronger privacy protection in large-scale models and suggest that IDIAs can be used to prove the unauthorized use of data for training and to enforce privacy laws.
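To make the attack concrete, the following is a minimal sketch of the core IDIA query against a public CLIP checkpoint via the HuggingFace transformers API. The prompt template, the candidate-name list, and the decision threshold are illustrative assumptions for exposition, not the paper's exact experimental setup: each image of the target person is scored against prompts for many candidate names, and the person is flagged as a likely training-set member if the model picks the correct name often enough.

```python
# Minimal IDIA sketch: zero-shot name classification with CLIP.
# Prompt template, candidate names, and threshold are assumptions,
# not the paper's exact configuration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def idia_score(image_paths, true_name, candidate_names):
    """Fraction of the person's images for which CLIP ranks the
    true name highest among all candidate name prompts."""
    prompts = [f"a photo of {name}" for name in candidate_names]
    true_idx = candidate_names.index(true_name)
    hits = 0
    for path in image_paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(text=prompts, images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            # logits_per_image has shape (1, num_prompts): one
            # image-text similarity score per candidate name.
            logits = model(**inputs).logits_per_image
        if logits.argmax(dim=-1).item() == true_idx:
            hits += 1
    return hits / len(image_paths)

def infer_membership(image_paths, true_name, candidate_names,
                     threshold=0.5):
    """Flag the person as a training-set member if the model names
    them correctly for at least `threshold` of their images.
    The threshold value here is an illustrative assumption."""
    return idia_score(image_paths, true_name, candidate_names) >= threshold
```

The intuition is that a model that has never seen a person during training should pick their name at roughly chance level among the candidates, whereas a memorizing model will pick it far more often; the gap between these two rates is what the attack exploits.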