Membership inference attacks (MIAs) are widely used to assess the privacy risks of machine learning models. However, when applied to pre-trained large language models (LLMs), they face significant challenges, including mislabeled samples, distribution shifts, and discrepancies in model size between experimental and real-world settings. To address these limitations, we introduce tokenizers as a new attack vector for membership inference. A tokenizer converts raw text into the tokens consumed by an LLM. Unlike full models, tokenizers can be efficiently trained from scratch, thereby avoiding the aforementioned challenges. In addition, a tokenizer's training data is typically representative of the data used to pre-train the LLM. Despite these advantages, the potential of tokenizers as an attack vector remains unexplored. To fill this gap, we present the first study of membership leakage through tokenizers and explore five attack methods for inferring dataset membership. Extensive experiments on millions of Internet samples reveal vulnerabilities in the tokenizers of state-of-the-art LLMs. To mitigate this emerging risk, we further propose an adaptive defense. Our findings highlight tokenizers as an overlooked yet critical privacy threat and underscore the urgent need for privacy-preserving mechanisms designed specifically for them.
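To make the premise concrete, the minimal sketch below (not part of the paper's method) illustrates the claim that a tokenizer can be trained from scratch on raw text at low cost, using the Hugging Face `tokenizers` library; the corpus file `corpus.txt`, vocabulary size, and special tokens are illustrative placeholders.

```python
# Minimal sketch: training a BPE tokenizer from scratch on a small text corpus.
# Assumes the Hugging Face `tokenizers` package; `corpus.txt` is a placeholder file.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# Build an untrained BPE model with an unknown-token fallback.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Learn merge rules directly from raw text files -- no model weights or GPU needed.
trainer = BpeTrainer(vocab_size=30_000, special_tokens=["[UNK]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)

# Convert raw text into the token sequence an LLM would consume.
encoding = tokenizer.encode("Membership inference via tokenizers.")
print(encoding.tokens)
```

Because training only learns vocabulary and merge rules from text statistics, it completes in minutes on a CPU, which is what makes controlled from-scratch experiments on tokenizers feasible where full LLM pre-training is not.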